Test Report: Hyper-V_Windows 17761

4145ffc8c3ff629bd64b588eb0db70699e9f5232:2023-12-13:32257

Failed tests (19/206)
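
The one failure expanded in this excerpt is TestOffline, which appears to simulate an offline start by routing traffic through an unreachable HTTP proxy (HTTP_PROXY=172.16.1.1:1). The failing invocation, copied verbatim from the log below, can be rerun by hand against a local minikube build (the profile name is the test's own):

    out/minikube-windows-amd64.exe start -p offline-docker-622300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv

It exits with status 90 (RUNTIME_ENABLE) after roughly eight minutes: "sudo systemctl restart docker" fails inside the guest VM, as shown in the journalctl excerpt at the end of the log.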

TestOffline (557.42s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-622300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-622300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: exit status 90 (8m1.4012638s)

-- stdout --
	* [offline-docker-622300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node offline-docker-622300 in cluster offline-docker-622300
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Found network options:
	  - HTTP_PROXY=172.16.1.1:1
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=172.16.1.1:1
	
	

-- /stdout --
** stderr ** 
	W1212 23:58:09.745078   13836 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1212 23:58:09.866849   13836 out.go:296] Setting OutFile to fd 680 ...
	I1212 23:58:09.875858   13836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:58:09.875858   13836 out.go:309] Setting ErrFile to fd 988...
	I1212 23:58:09.875858   13836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:58:09.903925   13836 out.go:303] Setting JSON to false
	I1212 23:58:09.908371   13836 start.go:128] hostinfo: {"hostname":"minikube7","uptime":79087,"bootTime":1702346402,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:58:09.908371   13836 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:58:09.910640   13836 out.go:177] * [offline-docker-622300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:58:09.911994   13836 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:58:09.911404   13836 notify.go:220] Checking for updates...
	I1212 23:58:09.913970   13836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:58:09.915459   13836 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:58:09.918395   13836 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:58:09.919747   13836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:58:09.923706   13836 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:58:09.924114   13836 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:58:16.478845   13836 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:58:16.479682   13836 start.go:298] selected driver: hyperv
	I1212 23:58:16.479682   13836 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:58:16.479682   13836 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:58:16.535592   13836 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:58:16.537514   13836 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:58:16.537514   13836 cni.go:84] Creating CNI manager for ""
	I1212 23:58:16.537514   13836 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 23:58:16.537514   13836 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 23:58:16.537514   13836 start_flags.go:323] config:
	{Name:offline-docker-622300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-622300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:58:16.538516   13836 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:58:16.539524   13836 out.go:177] * Starting control plane node offline-docker-622300 in cluster offline-docker-622300
	I1212 23:58:16.540527   13836 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:58:16.540527   13836 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:58:16.540527   13836 cache.go:56] Caching tarball of preloaded images
	I1212 23:58:16.541527   13836 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:58:16.541527   13836 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:58:16.541527   13836 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\offline-docker-622300\config.json ...
	I1212 23:58:16.541527   13836 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\offline-docker-622300\config.json: {Name:mk65723b7591cd9a861cc275844fb292cb6ab0ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:58:16.542524   13836 start.go:365] acquiring machines lock for offline-docker-622300: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:02:37.785676   13836 start.go:369] acquired machines lock for "offline-docker-622300" in 4m21.2418823s
	I1213 00:02:37.785756   13836 start.go:93] Provisioning new machine with config: &{Name:offline-docker-622300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-622300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 00:02:37.786350   13836 start.go:125] createHost starting for "" (driver="hyperv")
	I1213 00:02:37.787469   13836 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 00:02:37.788048   13836 start.go:159] libmachine.API.Create for "offline-docker-622300" (driver="hyperv")
	I1213 00:02:37.788170   13836 client.go:168] LocalClient.Create starting
	I1213 00:02:37.788978   13836 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1213 00:02:37.789302   13836 main.go:141] libmachine: Decoding PEM data...
	I1213 00:02:37.789421   13836 main.go:141] libmachine: Parsing certificate...
	I1213 00:02:37.789635   13836 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1213 00:02:37.789635   13836 main.go:141] libmachine: Decoding PEM data...
	I1213 00:02:37.789635   13836 main.go:141] libmachine: Parsing certificate...
	I1213 00:02:37.789635   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1213 00:02:39.787791   13836 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1213 00:02:39.787881   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:02:39.787881   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1213 00:02:41.611707   13836 main.go:141] libmachine: [stdout =====>] : False
	
	I1213 00:02:41.611765   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:02:41.611765   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1213 00:02:43.203791   13836 main.go:141] libmachine: [stdout =====>] : True
	
	I1213 00:02:43.204011   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:02:43.204089   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1213 00:02:47.071197   13836 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1213 00:02:47.071282   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:02:47.073765   13836 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1213 00:02:47.528939   13836 main.go:141] libmachine: Creating SSH key...
	I1213 00:02:47.634954   13836 main.go:141] libmachine: Creating VM...
	I1213 00:02:47.634954   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1213 00:02:50.715106   13836 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1213 00:02:50.715493   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:02:50.715603   13836 main.go:141] libmachine: Using switch "Default Switch"
	I1213 00:02:50.715643   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1213 00:02:52.586786   13836 main.go:141] libmachine: [stdout =====>] : True
	
	I1213 00:02:52.586786   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:02:52.586786   13836 main.go:141] libmachine: Creating VHD
	I1213 00:02:52.586910   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\fixed.vhd' -SizeBytes 10MB -Fixed
	I1213 00:02:57.055138   13836 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\fixe
	                          d.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E39D111D-92B2-4707-B2E0-6B47FA6FFBFD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1213 00:02:57.055221   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:02:57.055302   13836 main.go:141] libmachine: Writing magic tar header
	I1213 00:02:57.055302   13836 main.go:141] libmachine: Writing SSH key tar header
	I1213 00:02:57.066750   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\disk.vhd' -VHDType Dynamic -DeleteSource
	I1213 00:03:00.464515   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:00.464515   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:00.464515   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\disk.vhd' -SizeBytes 20000MB
	I1213 00:03:03.141239   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:03.141239   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:03.141484   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM offline-docker-622300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I1213 00:03:07.631048   13836 main.go:141] libmachine: [stdout =====>] : 
	Name                  State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                  ----- ----------- ----------------- ------   ------             -------
	offline-docker-622300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1213 00:03:07.631048   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:07.631048   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName offline-docker-622300 -DynamicMemoryEnabled $false
	I1213 00:03:09.963021   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:09.963021   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:09.963021   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor offline-docker-622300 -Count 2
	I1213 00:03:12.396058   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:12.396058   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:12.396058   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName offline-docker-622300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\boot2docker.iso'
	I1213 00:03:15.037479   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:15.037479   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:15.037802   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName offline-docker-622300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\disk.vhd'
	I1213 00:03:17.840372   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:17.840431   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:17.840431   13836 main.go:141] libmachine: Starting VM...
	I1213 00:03:17.840431   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM offline-docker-622300
	I1213 00:03:20.989373   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:20.989373   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:20.989373   13836 main.go:141] libmachine: Waiting for host to start...
	I1213 00:03:20.989373   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:03:24.205459   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:03:24.205555   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:24.205755   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:03:27.439686   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:27.439686   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:28.449716   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:03:31.997971   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:03:31.998216   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:31.998331   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:03:35.060692   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:35.060871   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:36.063941   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:03:38.648626   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:03:38.648816   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:38.648945   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:03:41.446795   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:41.447102   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:42.459473   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:03:44.772048   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:03:44.772085   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:44.772085   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:03:47.455569   13836 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:03:47.455569   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:48.457150   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:03:50.889606   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:03:50.890184   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:50.890184   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:03:53.596318   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:03:53.596318   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:53.596318   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:03:55.796296   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:03:55.796356   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:55.796356   13836 machine.go:88] provisioning docker machine ...
	I1213 00:03:55.796356   13836 buildroot.go:166] provisioning hostname "offline-docker-622300"
	I1213 00:03:55.796356   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:03:58.037544   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:03:58.037654   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:03:58.037710   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:00.682394   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:00.682609   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:00.690030   13836 main.go:141] libmachine: Using SSH client type: native
	I1213 00:04:00.700043   13836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.211 22 <nil> <nil>}
	I1213 00:04:00.700043   13836 main.go:141] libmachine: About to run SSH command:
	sudo hostname offline-docker-622300 && echo "offline-docker-622300" | sudo tee /etc/hostname
	I1213 00:04:00.868525   13836 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-622300
	
	I1213 00:04:00.868653   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:03.075491   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:03.075756   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:03.075849   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:05.693369   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:05.693443   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:05.700725   13836 main.go:141] libmachine: Using SSH client type: native
	I1213 00:04:05.701973   13836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.211 22 <nil> <nil>}
	I1213 00:04:05.702110   13836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\soffline-docker-622300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-622300/g' /etc/hosts;
				else 
					echo '127.0.1.1 offline-docker-622300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:04:05.838288   13836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:04:05.838288   13836 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1213 00:04:05.838288   13836 buildroot.go:174] setting up certificates
	I1213 00:04:05.838288   13836 provision.go:83] configureAuth start
	I1213 00:04:05.838288   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:08.077191   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:08.077191   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:08.077294   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:10.789786   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:10.789786   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:10.789902   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:13.028269   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:13.028348   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:13.028419   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:15.773787   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:15.773787   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:15.773940   13836 provision.go:138] copyHostCerts
	I1213 00:04:15.774391   13836 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1213 00:04:15.774391   13836 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1213 00:04:15.775013   13836 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1213 00:04:15.775779   13836 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1213 00:04:15.775779   13836 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1213 00:04:15.776566   13836 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 00:04:15.777702   13836 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1213 00:04:15.777702   13836 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1213 00:04:15.778352   13836 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 00:04:15.779047   13836 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.offline-docker-622300 san=[172.30.61.211 172.30.61.211 localhost 127.0.0.1 minikube offline-docker-622300]
	I1213 00:04:16.310364   13836 provision.go:172] copyRemoteCerts
	I1213 00:04:16.326907   13836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:04:16.326907   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:18.571238   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:18.571499   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:18.571555   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:21.223049   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:21.223139   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:21.223139   13836 sshutil.go:53] new ssh client: &{IP:172.30.61.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\id_rsa Username:docker}
	I1213 00:04:21.333597   13836 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0065868s)
	I1213 00:04:21.334076   13836 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 00:04:21.378044   13836 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1213 00:04:21.419924   13836 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 00:04:21.467688   13836 provision.go:86] duration metric: configureAuth took 15.6293304s
	I1213 00:04:21.467791   13836 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:04:21.468485   13836 config.go:182] Loaded profile config "offline-docker-622300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1213 00:04:21.468570   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:23.762844   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:23.762844   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:23.762981   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:26.366406   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:26.366468   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:26.372122   13836 main.go:141] libmachine: Using SSH client type: native
	I1213 00:04:26.372314   13836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.211 22 <nil> <nil>}
	I1213 00:04:26.372886   13836 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 00:04:26.513957   13836 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 00:04:26.513957   13836 buildroot.go:70] root file system type: tmpfs
	I1213 00:04:26.513957   13836 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 00:04:26.513957   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:28.750787   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:28.750787   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:28.750787   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:31.387812   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:31.387812   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:31.392822   13836 main.go:141] libmachine: Using SSH client type: native
	I1213 00:04:31.393815   13836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.211 22 <nil> <nil>}
	I1213 00:04:31.393815   13836 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="HTTP_PROXY=172.16.1.1:1"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 00:04:31.544566   13836 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=HTTP_PROXY=172.16.1.1:1
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 00:04:31.544566   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:33.816799   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:33.816959   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:33.816959   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:36.417260   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:36.417260   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:36.422918   13836 main.go:141] libmachine: Using SSH client type: native
	I1213 00:04:36.423448   13836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.211 22 <nil> <nil>}
	I1213 00:04:36.423634   13836 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 00:04:37.438209   13836 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 00:04:37.438209   13836 machine.go:91] provisioned docker machine in 41.6416657s
	I1213 00:04:37.438209   13836 client.go:171] LocalClient.Create took 1m59.6495004s
	I1213 00:04:37.438209   13836 start.go:167] duration metric: libmachine.API.Create for "offline-docker-622300" took 1m59.6496219s
	I1213 00:04:37.438209   13836 start.go:300] post-start starting for "offline-docker-622300" (driver="hyperv")
	I1213 00:04:37.438209   13836 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:04:37.453192   13836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:04:37.453192   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:39.746607   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:39.746955   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:39.746998   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:42.357470   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:42.357517   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:42.358137   13836 sshutil.go:53] new ssh client: &{IP:172.30.61.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\id_rsa Username:docker}
	I1213 00:04:42.467654   13836 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0143102s)
	I1213 00:04:42.480001   13836 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:04:42.489245   13836 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:04:42.489307   13836 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1213 00:04:42.489848   13836 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1213 00:04:42.491150   13836 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1213 00:04:42.504535   13836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:04:42.520523   13836 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1213 00:04:42.560091   13836 start.go:303] post-start completed in 5.1218598s
	I1213 00:04:42.562576   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:44.863846   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:44.864145   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:44.864246   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:47.609786   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:47.609964   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:47.610179   13836 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\offline-docker-622300\config.json ...
	I1213 00:04:47.613012   13836 start.go:128] duration metric: createHost completed in 2m9.8260774s
	I1213 00:04:47.613103   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:49.845521   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:49.845584   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:49.845584   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:52.510661   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:52.510980   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:52.516912   13836 main.go:141] libmachine: Using SSH client type: native
	I1213 00:04:52.517673   13836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.211 22 <nil> <nil>}
	I1213 00:04:52.517771   13836 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 00:04:52.646559   13836 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702425892.644935581
	
	I1213 00:04:52.646678   13836 fix.go:206] guest clock: 1702425892.644935581
	I1213 00:04:52.646678   13836 fix.go:219] Guest: 2023-12-13 00:04:52.644935581 +0000 UTC Remote: 2023-12-13 00:04:47.6131036 +0000 UTC m=+397.983189801 (delta=5.031831981s)
	I1213 00:04:52.646755   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:54.882797   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:54.883940   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:54.883940   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:04:57.386632   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:04:57.386632   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:57.393890   13836 main.go:141] libmachine: Using SSH client type: native
	I1213 00:04:57.394561   13836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.211 22 <nil> <nil>}
	I1213 00:04:57.394561   13836 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702425892
	I1213 00:04:57.534786   13836 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Dec 13 00:04:52 UTC 2023
	
	I1213 00:04:57.534873   13836 fix.go:226] clock set: Wed Dec 13 00:04:52 UTC 2023
	 (err=<nil>)
	I1213 00:04:57.534873   13836 start.go:83] releasing machines lock for "offline-docker-622300", held for 2m19.7484887s
	I1213 00:04:57.534998   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:04:59.821993   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:04:59.822183   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:04:59.822373   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:05:02.448223   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:05:02.448223   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:05:02.449282   13836 out.go:177] * Found network options:
	I1213 00:05:02.450360   13836 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	W1213 00:05:02.451164   13836 out.go:239] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.30.61.211).
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.30.61.211).
	I1213 00:05:02.451864   13836 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1213 00:05:02.452319   13836 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	I1213 00:05:02.459085   13836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:05:02.459085   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:05:02.481595   13836 ssh_runner.go:195] Run: cat /version.json
	I1213 00:05:02.481595   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-622300 ).state
	I1213 00:05:04.932111   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:05:04.932210   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:05:04.932210   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:05:04.963950   13836 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:05:04.964062   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:05:04.964062   13836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-622300 ).networkadapters[0]).ipaddresses[0]
	I1213 00:05:07.728842   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:05:07.728842   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:05:07.728842   13836 sshutil.go:53] new ssh client: &{IP:172.30.61.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\id_rsa Username:docker}
	I1213 00:05:07.826860   13836 main.go:141] libmachine: [stdout =====>] : 172.30.61.211
	
	I1213 00:05:07.826860   13836 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:05:07.827844   13836 sshutil.go:53] new ssh client: &{IP:172.30.61.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\offline-docker-622300\id_rsa Username:docker}
	I1213 00:05:07.905981   13836 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4468715s)
	I1213 00:05:07.944200   13836 ssh_runner.go:235] Completed: cat /version.json: (5.4625805s)
	I1213 00:05:07.962992   13836 ssh_runner.go:195] Run: systemctl --version
	I1213 00:05:07.987404   13836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:05:07.998705   13836 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:05:08.017994   13836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:05:08.043593   13836 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:05:08.043593   13836 start.go:475] detecting cgroup driver to use...
	I1213 00:05:08.044017   13836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:05:08.098930   13836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1213 00:05:08.131943   13836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 00:05:08.149289   13836 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 00:05:08.169399   13836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 00:05:08.207232   13836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 00:05:08.253194   13836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 00:05:08.299521   13836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 00:05:08.343310   13836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:05:08.389381   13836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 00:05:08.420691   13836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:05:08.458704   13836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:05:08.495237   13836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:05:08.683487   13836 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 00:05:08.718621   13836 start.go:475] detecting cgroup driver to use...
	I1213 00:05:08.744781   13836 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 00:05:08.795553   13836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:05:08.831325   13836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:05:08.872966   13836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:05:08.919138   13836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 00:05:08.966468   13836 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 00:05:09.027472   13836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 00:05:09.050476   13836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:05:09.100175   13836 ssh_runner.go:195] Run: which cri-dockerd
	I1213 00:05:09.118184   13836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 00:05:09.134110   13836 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1213 00:05:09.179387   13836 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 00:05:09.376466   13836 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 00:05:09.542067   13836 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 00:05:09.542067   13836 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 00:05:09.586248   13836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:05:09.770223   13836 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 00:06:10.895998   13836 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1255004s)
	I1213 00:06:10.914955   13836 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1213 00:06:10.945572   13836 out.go:177] 
	W1213 00:06:10.946390   13836 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Wed 2023-12-13 00:03:42 UTC, ends at Wed 2023-12-13 00:06:10 UTC. --
	Dec 13 00:04:36 offline-docker-622300 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:04:36 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:36.959868252Z" level=info msg="Starting up"
	Dec 13 00:04:36 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:36.961013753Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:04:36 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:36.962590455Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:36.999960085Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.028239106Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.028287806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.031239408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.031452209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.031723009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.031853009Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.031963309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.032150109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.032171309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.032272109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.032859310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.032971010Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.032990610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.033291510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.033397910Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.033482910Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.033573910Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.048570222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.048694522Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.048721522Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.048793822Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.048915022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.048942122Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.048961622Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.049261422Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.049417122Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.049661722Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.049914723Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.050196623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.050468423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.050688223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.050919423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.051189923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.051499424Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.051833224Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.051868524Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.052224824Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053163225Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053243325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053280025Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053325425Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053505125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053574625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053605025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053631025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053658025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053682625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053707525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053732825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053789325Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053891926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053931226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053961026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.053987126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.054218826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.054303726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.054340026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.054369026Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.054460726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.054505426Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.054547026Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.055385127Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.055477927Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.055577527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:04:37 offline-docker-622300 dockerd[680]: time="2023-12-13T00:04:37.055641027Z" level=info msg="containerd successfully booted in 0.057127s"
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.096923958Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.113321370Z" level=info msg="Loading containers: start."
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.348590148Z" level=info msg="Loading containers: done."
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.371604365Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.371633765Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.371641965Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.371648966Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.371671766Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.371776366Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.434576713Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:04:37 offline-docker-622300 dockerd[674]: time="2023-12-13T00:04:37.434756813Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:04:37 offline-docker-622300 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:05:09 offline-docker-622300 dockerd[674]: time="2023-12-13T00:05:09.791832659Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:05:09 offline-docker-622300 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:05:09 offline-docker-622300 dockerd[674]: time="2023-12-13T00:05:09.793081559Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:05:09 offline-docker-622300 dockerd[674]: time="2023-12-13T00:05:09.793416259Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:05:09 offline-docker-622300 dockerd[674]: time="2023-12-13T00:05:09.793483259Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:05:09 offline-docker-622300 dockerd[674]: time="2023-12-13T00:05:09.793522459Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:05:10 offline-docker-622300 systemd[1]: docker.service: Succeeded.
	Dec 13 00:05:10 offline-docker-622300 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:05:10 offline-docker-622300 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:05:10 offline-docker-622300 dockerd[1011]: time="2023-12-13T00:05:10.884634859Z" level=info msg="Starting up"
	Dec 13 00:06:10 offline-docker-622300 dockerd[1011]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 00:06:10 offline-docker-622300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:06:10 offline-docker-622300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:06:10 offline-docker-622300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1213 00:06:10.946447   13836 out.go:239] * 
	W1213 00:06:10.948164   13836 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 00:06:10.949085   13836 out.go:177] 

                                                
                                                
** /stderr **
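Note on the failure above: the harness stopped the system containerd (`sudo systemctl stop -f containerd`) before switching the runtime to Docker, and the restarted dockerd (pid 1011) then spent its full 60-second deadline failing to dial /run/containerd/containerd.sock. A minimal triage sketch for that state, assuming interactive SSH access to the VM (for example via `minikube ssh -p offline-docker-622300`); these are standard systemd/journalctl commands, not harness output:

	# hypothetical manual session on the node
	sudo systemctl status containerd --no-pager        # stopped earlier by the harness
	ls -l /run/containerd/containerd.sock              # the socket dockerd failed to dial
	sudo journalctl -u containerd --no-pager | tail -n 20
	sudo systemctl start containerd                    # restore the socket, then retry:
	sudo systemctl restart docker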
aab_offline_test.go:58: out/minikube-windows-amd64.exe start -p offline-docker-622300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv failed: exit status 90
panic.go:523: *** TestOffline FAILED at 2023-12-13 00:06:11.2677929 +0000 UTC m=+7329.293505301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-622300 -n offline-docker-622300
E1213 00:06:22.652111   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-622300 -n offline-docker-622300: exit status 6 (13.5479069s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1213 00:06:11.407723    8984 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1213 00:06:24.736567    8984 status.go:415] kubeconfig endpoint: extract IP: "offline-docker-622300" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "offline-docker-622300" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
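The quoted status output is itself actionable: the host is `Running`, but kubectl points at a stale context. A short sketch of the fix the warning suggests, assuming the profile still existed (the harness instead deletes it below):

	minikube update-context -p offline-docker-622300   # repoint kubeconfig at the VM's current IP
	kubectl config current-context                     # verify which context is now active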
helpers_test.go:175: Cleaning up "offline-docker-622300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-622300
E1213 00:06:25.462538   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-622300: (1m2.2154463s)
--- FAIL: TestOffline (557.42s)

                                                
                                    
TestAddons/parallel/Registry (71.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 25.6868ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-gzmdd" [ce060e5a-4538-49fd-b48d-35ccd21eb735] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0387543s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s6pvn" [ebbb0e34-e907-46c7-bb28-c282ed55fb13] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0386813s
addons_test.go:339: (dbg) Run:  kubectl --context addons-310200 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-310200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-310200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.0735904s)
addons_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 ip
addons_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 ip: (2.8476352s)
addons_test.go:363: expected stderr to be -empty- but got: *"W1212 22:11:38.127373   14528 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-310200 ip"
2023/12/12 22:11:40 [DEBUG] GET http://172.30.52.75:5000
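For reference, the registry probe this test performs can be replayed by hand with the same commands logged above; the final curl line is an assumed extra check against the standard Docker Registry HTTP API ping endpoint (/v2/), not something the test itself runs:

	kubectl --context addons-310200 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	minikube -p addons-310200 ip                       # node IP, probed on :5000 above
	curl -sS "http://$(minikube -p addons-310200 ip):5000/v2/"   # assumed manual check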
addons_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 addons disable registry --alsologtostderr -v=1
addons_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 addons disable registry --alsologtostderr -v=1: (16.6823837s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-310200 -n addons-310200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-310200 -n addons-310200: (13.3146375s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 logs -n 25: (9.2528519s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |                     |
	|         | -p download-only-524600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |                     |
	|         | -p download-only-524600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |                     |
	|         | -p download-only-524600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC | 12 Dec 23 22:04 UTC |
	| delete  | -p download-only-524600                                                                     | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC | 12 Dec 23 22:04 UTC |
	| delete  | -p download-only-524600                                                                     | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC | 12 Dec 23 22:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-613500 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |                     |
	|         | binary-mirror-613500                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:50993                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-613500                                                                     | binary-mirror-613500 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	| addons  | disable dashboard -p                                                                        | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:05 UTC |                     |
	|         | addons-310200                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:05 UTC |                     |
	|         | addons-310200                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-310200 --wait=true                                                                | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:11 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-310200 addons                                                                        | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-310200 ssh cat                                                                       | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|         | /opt/local-path-provisioner/pvc-578e7ec9-4d8f-487f-8d65-81c325ac781a_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-310200 ip                                                                            | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	| addons  | addons-310200 addons disable                                                                | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-310200 addons disable                                                                | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:11 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-310200        | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:12 UTC |
	|         | addons-310200                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:05:02
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:05:02.081238   10056 out.go:296] Setting OutFile to fd 836 ...
	I1212 22:05:02.082327   10056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:05:02.082357   10056 out.go:309] Setting ErrFile to fd 856...
	I1212 22:05:02.082401   10056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:05:02.105522   10056 out.go:303] Setting JSON to false
	I1212 22:05:02.109246   10056 start.go:128] hostinfo: {"hostname":"minikube7","uptime":72299,"bootTime":1702346402,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 22:05:02.109246   10056 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 22:05:02.110241   10056 out.go:177] * [addons-310200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 22:05:02.112306   10056 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 22:05:02.112149   10056 notify.go:220] Checking for updates...
	I1212 22:05:02.112368   10056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:05:02.113603   10056 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 22:05:02.114206   10056 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:05:02.114868   10056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:05:02.116642   10056 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:05:07.542313   10056 out.go:177] * Using the hyperv driver based on user configuration
	I1212 22:05:07.543048   10056 start.go:298] selected driver: hyperv
	I1212 22:05:07.543142   10056 start.go:902] validating driver "hyperv" against <nil>
	I1212 22:05:07.543142   10056 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:05:07.594002   10056 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:05:07.595901   10056 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:05:07.596042   10056 cni.go:84] Creating CNI manager for ""
	I1212 22:05:07.596042   10056 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 22:05:07.596042   10056 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 22:05:07.596042   10056 start_flags.go:323] config:
	{Name:addons-310200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-310200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:05:07.596603   10056 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:05:07.598118   10056 out.go:177] * Starting control plane node addons-310200 in cluster addons-310200
	I1212 22:05:07.598866   10056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 22:05:07.598866   10056 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 22:05:07.598866   10056 cache.go:56] Caching tarball of preloaded images
	I1212 22:05:07.599613   10056 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 22:05:07.599613   10056 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 22:05:07.600494   10056 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\config.json ...
	I1212 22:05:07.600777   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\config.json: {Name:mkf6702d293bdd1c5f33d5c4076c8e9b3ed1b939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:05:07.602271   10056 start.go:365] acquiring machines lock for addons-310200: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:05:07.602271   10056 start.go:369] acquired machines lock for "addons-310200" in 0s
	I1212 22:05:07.603033   10056 start.go:93] Provisioning new machine with config: &{Name:addons-310200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-310200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 22:05:07.603120   10056 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 22:05:07.604070   10056 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1212 22:05:07.604227   10056 start.go:159] libmachine.API.Create for "addons-310200" (driver="hyperv")
	I1212 22:05:07.604227   10056 client.go:168] LocalClient.Create starting
	I1212 22:05:07.605053   10056 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 22:05:07.679141   10056 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 22:05:08.069029   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 22:05:10.198935   10056 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 22:05:10.199026   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:10.199026   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 22:05:11.899824   10056 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 22:05:11.899824   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:11.899965   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 22:05:13.376095   10056 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 22:05:13.376225   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:13.376225   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 22:05:17.148675   10056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 22:05:17.148744   10056 main.go:141] libmachine: [stderr =====>] : 
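The stanza above is libmachine's Hyper-V preflight: confirm the Hyper-V PowerShell module is available, test membership in the Hyper-V Administrators group (SID S-1-5-32-578, False here) and the built-in Administrators role (True here), then enumerate switches, preferring an External switch and falling back to the Default Switch by its well-known GUID. A minimal sketch for reproducing the same checks by hand, using the exact commands from this log (assumes an elevated PowerShell session):

	# Is the Hyper-V PowerShell module installed?
	@(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	# Member of Hyper-V Administrators (S-1-5-32-578)? False is acceptable when elevated.
	([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	# Member of the built-in Administrators role?
	([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	# External switches, or the Default Switch by its well-known GUID.
	Hyper-V\Get-VMSwitch | Select-Object Id, Name, SwitchType |
	    Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
	    Sort-Object -Property SwitchType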
	I1212 22:05:17.151085   10056 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 22:05:17.632400   10056 main.go:141] libmachine: Creating SSH key...
	I1212 22:05:17.770744   10056 main.go:141] libmachine: Creating VM...
	I1212 22:05:17.770744   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 22:05:20.612587   10056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 22:05:20.612587   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:20.612846   10056 main.go:141] libmachine: Using switch "Default Switch"
	I1212 22:05:20.612943   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 22:05:22.385560   10056 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 22:05:22.385652   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:22.385707   10056 main.go:141] libmachine: Creating VHD
	I1212 22:05:22.385707   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 22:05:26.134546   10056 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1C580ECA-906A-4E90-9417-040AADEB853B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 22:05:26.134546   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:26.134846   10056 main.go:141] libmachine: Writing magic tar header
	I1212 22:05:26.134953   10056 main.go:141] libmachine: Writing SSH key tar header
	I1212 22:05:26.143886   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 22:05:29.335786   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:29.335786   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:29.335877   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\disk.vhd' -SizeBytes 20000MB
	I1212 22:05:31.848868   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:31.848868   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:31.848978   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-310200 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I1212 22:05:35.381882   10056 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-310200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 22:05:35.381882   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:35.381882   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-310200 -DynamicMemoryEnabled $false
	I1212 22:05:37.555272   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:37.555272   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:37.555272   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-310200 -Count 2
	I1212 22:05:39.662250   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:39.662250   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:39.662348   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-310200 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\boot2docker.iso'
	I1212 22:05:42.195509   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:42.195701   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:42.195814   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-310200 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\disk.vhd'
	I1212 22:05:44.735481   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:44.735481   10056 main.go:141] libmachine: [stderr =====>] : 
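Taken together, the commands from "Creating VHD" through "Add-VMHardDiskDrive" form the driver's VM-creation sequence: create a tiny fixed VHD, write a tar header carrying the SSH key into it, convert it to a dynamic VHD, grow it to the requested size, then create and wire up the VM. A condensed replay of that sequence as one sketch (paths and VM name taken from this run; not minikube's actual code path):

	# Condensed replay of the creation sequence logged above.
	$m = 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200'
	Hyper-V\New-VHD -Path "$m\fixed.vhd" -SizeBytes 10MB -Fixed
	# (here minikube writes the "magic" tar header plus SSH key into fixed.vhd)
	Hyper-V\Convert-VHD -Path "$m\fixed.vhd" -DestinationPath "$m\disk.vhd" -VHDType Dynamic -DeleteSource
	Hyper-V\Resize-VHD -Path "$m\disk.vhd" -SizeBytes 20000MB
	Hyper-V\New-VM addons-310200 -Path $m -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	Hyper-V\Set-VMMemory -VMName addons-310200 -DynamicMemoryEnabled $false
	Hyper-V\Set-VMProcessor addons-310200 -Count 2
	Hyper-V\Set-VMDvdDrive -VMName addons-310200 -Path "$m\boot2docker.iso"
	Hyper-V\Add-VMHardDiskDrive -VMName addons-310200 -Path "$m\disk.vhd"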
	I1212 22:05:44.735668   10056 main.go:141] libmachine: Starting VM...
	I1212 22:05:44.735668   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-310200
	I1212 22:05:47.592159   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:47.592546   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:47.592546   10056 main.go:141] libmachine: Waiting for host to start...
	I1212 22:05:47.592546   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:05:49.826097   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:05:49.826323   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:49.826323   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:05:52.279543   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:52.279543   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:53.294573   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:05:55.447105   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:05:55.447135   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:55.447204   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:05:57.938371   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:05:57.938371   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:05:58.942868   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:01.086035   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:01.086097   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:01.086169   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:03.553745   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:06:03.553868   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:04.566390   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:06.776035   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:06.776035   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:06.776386   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:09.266267   10056 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:06:09.266338   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:10.268357   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:12.505581   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:12.505664   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:12.505664   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:15.016089   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:15.016089   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:15.016089   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:17.133253   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:17.133253   10056 main.go:141] libmachine: [stderr =====>] : 
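The repeated state/IP query pairs that follow "Waiting for host to start..." are a polling loop: minikube re-reads the VM state and the first adapter's first IP address, sleeping briefly between rounds, until the guest reports an address (172.30.52.75 after roughly 27 seconds in this run). A sketch of that loop, under the assumption that the loop shape and sleep interval are illustrative rather than minikube's exact logic:

	# Poll until the guest's first adapter reports an IP (shape/interval assumed).
	do {
	    Start-Sleep -Seconds 1
	    $state = ( Hyper-V\Get-VM addons-310200 ).State
	    $ip = (( Hyper-V\Get-VM addons-310200 ).NetworkAdapters[0]).IPAddresses[0]
	} while ($state -eq 'Running' -and [string]::IsNullOrEmpty($ip))
	$ip   # 172.30.52.75 in this run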
	I1212 22:06:17.133345   10056 machine.go:88] provisioning docker machine ...
	I1212 22:06:17.133437   10056 buildroot.go:166] provisioning hostname "addons-310200"
	I1212 22:06:17.133675   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:19.265469   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:19.265616   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:19.265616   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:21.741021   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:21.741021   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:21.747117   10056 main.go:141] libmachine: Using SSH client type: native
	I1212 22:06:21.757349   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x984f40] 0x987a80 <nil>  [] 0s} 172.30.52.75 22 <nil> <nil>}
	I1212 22:06:21.757349   10056 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-310200 && echo "addons-310200" | sudo tee /etc/hostname
	I1212 22:06:21.907204   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-310200
	
	I1212 22:06:21.907204   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:24.025439   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:24.025616   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:24.025751   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:26.543456   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:26.543456   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:26.548664   10056 main.go:141] libmachine: Using SSH client type: native
	I1212 22:06:26.549402   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x984f40] 0x987a80 <nil>  [] 0s} 172.30.52.75 22 <nil> <nil>}
	I1212 22:06:26.549402   10056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-310200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-310200/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-310200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:06:26.689611   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:06:26.689611   10056 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 22:06:26.689611   10056 buildroot.go:174] setting up certificates
	I1212 22:06:26.689611   10056 provision.go:83] configureAuth start
	I1212 22:06:26.689611   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:28.768477   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:28.768720   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:28.768720   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:31.259402   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:31.259402   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:31.259505   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:33.372165   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:33.372165   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:33.372165   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:35.879488   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:35.879931   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:35.879931   10056 provision.go:138] copyHostCerts
	I1212 22:06:35.880548   10056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 22:06:35.880736   10056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 22:06:35.883045   10056 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 22:06:35.884011   10056 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-310200 san=[172.30.52.75 172.30.52.75 localhost 127.0.0.1 minikube addons-310200]
	I1212 22:06:36.213135   10056 provision.go:172] copyRemoteCerts
	I1212 22:06:36.228227   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:06:36.228227   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:38.312052   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:38.312052   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:38.312154   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:40.775565   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:40.775565   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:40.776089   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:06:40.883219   10056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6549712s)
	I1212 22:06:40.883669   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 22:06:40.924370   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 22:06:40.971347   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:06:41.008017   10056 provision.go:86] duration metric: configureAuth took 14.3183422s
	I1212 22:06:41.008017   10056 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:06:41.008784   10056 config.go:182] Loaded profile config "addons-310200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 22:06:41.008784   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:43.103924   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:43.104160   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:43.104263   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:45.552664   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:45.552664   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:45.559037   10056 main.go:141] libmachine: Using SSH client type: native
	I1212 22:06:45.559915   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x984f40] 0x987a80 <nil>  [] 0s} 172.30.52.75 22 <nil> <nil>}
	I1212 22:06:45.559915   10056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 22:06:45.703158   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 22:06:45.703158   10056 buildroot.go:70] root file system type: tmpfs
	I1212 22:06:45.703725   10056 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 22:06:45.703952   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:47.833864   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:47.833864   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:47.833959   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:50.338323   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:50.338323   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:50.344442   10056 main.go:141] libmachine: Using SSH client type: native
	I1212 22:06:50.345213   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x984f40] 0x987a80 <nil>  [] 0s} 172.30.52.75 22 <nil> <nil>}
	I1212 22:06:50.345213   10056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 22:06:50.490841   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 22:06:50.490841   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:52.586831   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:52.587119   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:52.587119   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:06:55.117911   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:06:55.117996   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:55.123856   10056 main.go:141] libmachine: Using SSH client type: native
	I1212 22:06:55.124567   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x984f40] 0x987a80 <nil>  [] 0s} 172.30.52.75 22 <nil> <nil>}
	I1212 22:06:55.124567   10056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 22:06:56.093753   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 22:06:56.093753   10056 machine.go:91] provisioned docker machine in 38.9602326s
	I1212 22:06:56.093753   10056 client.go:171] LocalClient.Create took 1m48.4890378s
	I1212 22:06:56.093753   10056 start.go:167] duration metric: libmachine.API.Create for "addons-310200" took 1m48.4890378s
	I1212 22:06:56.093753   10056 start.go:300] post-start starting for "addons-310200" (driver="hyperv")
	I1212 22:06:56.094286   10056 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:06:56.108503   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:06:56.108503   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:06:58.273047   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:06:58.273193   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:06:58.273193   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:07:00.782483   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:07:00.782803   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:00.783640   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:07:00.894573   10056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7860491s)
	I1212 22:07:00.908377   10056 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:07:00.916020   10056 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:07:00.916132   10056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 22:07:00.916828   10056 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 22:07:00.917284   10056 start.go:303] post-start completed in 4.8235094s
	I1212 22:07:00.921171   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:07:03.057043   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:07:03.057221   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:03.057221   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:07:05.564596   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:07:05.564596   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:05.564932   10056 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\config.json ...
	I1212 22:07:05.567766   10056 start.go:128] duration metric: createHost completed in 1m57.964114s
	I1212 22:07:05.567926   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:07:07.690550   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:07:07.690550   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:07.690550   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:07:10.187546   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:07:10.187622   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:10.192780   10056 main.go:141] libmachine: Using SSH client type: native
	I1212 22:07:10.193549   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x984f40] 0x987a80 <nil>  [] 0s} 172.30.52.75 22 <nil> <nil>}
	I1212 22:07:10.193609   10056 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 22:07:10.333884   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702418830.325862727
	
	I1212 22:07:10.333884   10056 fix.go:206] guest clock: 1702418830.325862727
	I1212 22:07:10.334022   10056 fix.go:219] Guest: 2023-12-12 22:07:10.325862727 +0000 UTC Remote: 2023-12-12 22:07:05.5679261 +0000 UTC m=+123.662271201 (delta=4.757936627s)
	I1212 22:07:10.334100   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:07:12.425181   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:07:12.425181   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:12.425181   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:07:14.904312   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:07:14.904623   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:14.910006   10056 main.go:141] libmachine: Using SSH client type: native
	I1212 22:07:14.910724   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x984f40] 0x987a80 <nil>  [] 0s} 172.30.52.75 22 <nil> <nil>}
	I1212 22:07:14.910724   10056 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702418830
	I1212 22:07:15.045247   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 22:07:10 UTC 2023
	
	I1212 22:07:15.045366   10056 fix.go:226] clock set: Tue Dec 12 22:07:10 UTC 2023
	 (err=<nil>)
	I1212 22:07:15.045366   10056 start.go:83] releasing machines lock for "addons-310200", held for 2m7.4419778s
	I1212 22:07:15.045790   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:07:17.142830   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:07:17.142830   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:17.142830   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:07:19.602103   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:07:19.602370   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:19.605961   10056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:07:19.605961   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:07:19.621950   10056 ssh_runner.go:195] Run: cat /version.json
	I1212 22:07:19.621950   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:07:21.784514   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:07:21.784514   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:21.784514   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:07:21.784514   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:07:21.784514   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:21.784514   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:07:24.342275   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:07:24.342275   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:24.342338   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:07:24.366820   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:07:24.366922   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:07:24.367887   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:07:24.538746   10056 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9327632s)
	I1212 22:07:24.538746   10056 ssh_runner.go:235] Completed: cat /version.json: (4.916774s)
	I1212 22:07:24.554306   10056 ssh_runner.go:195] Run: systemctl --version
	I1212 22:07:24.574080   10056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 22:07:24.581857   10056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:07:24.596626   10056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:07:24.621037   10056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 22:07:24.621037   10056 start.go:475] detecting cgroup driver to use...
	I1212 22:07:24.621037   10056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:07:24.662700   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 22:07:24.690397   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 22:07:24.706741   10056 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 22:07:24.718582   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 22:07:24.749400   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 22:07:24.778140   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 22:07:24.814219   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 22:07:24.844789   10056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:07:24.871896   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 22:07:24.899475   10056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:07:24.925498   10056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:07:24.953037   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:07:25.130737   10056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 22:07:25.157368   10056 start.go:475] detecting cgroup driver to use...
	I1212 22:07:25.173628   10056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 22:07:25.203623   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:07:25.233568   10056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:07:25.270247   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:07:25.301625   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 22:07:25.334371   10056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 22:07:25.383988   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 22:07:25.404860   10056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:07:25.445559   10056 ssh_runner.go:195] Run: which cri-dockerd
	I1212 22:07:25.463307   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 22:07:25.477829   10056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 22:07:25.516473   10056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 22:07:25.691467   10056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 22:07:25.841294   10056 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 22:07:25.841663   10056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 22:07:25.882559   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:07:26.036864   10056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 22:07:27.544226   10056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5073545s)
	I1212 22:07:27.557744   10056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 22:07:27.736217   10056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 22:07:27.899841   10056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 22:07:28.068114   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:07:28.245807   10056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 22:07:28.286034   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:07:28.461063   10056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 22:07:28.566666   10056 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 22:07:28.580466   10056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 22:07:28.588088   10056 start.go:543] Will wait 60s for crictl version
	I1212 22:07:28.602317   10056 ssh_runner.go:195] Run: which crictl
	I1212 22:07:28.621485   10056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:07:28.693704   10056 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 22:07:28.705973   10056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 22:07:28.750235   10056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 22:07:28.784130   10056 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 22:07:28.784454   10056 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 22:07:28.789516   10056 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 22:07:28.789516   10056 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 22:07:28.789516   10056 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 22:07:28.789516   10056 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 22:07:28.793031   10056 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 22:07:28.793031   10056 ip.go:210] interface addr: 172.30.48.1/20
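
The ip.go lines above walk the host's network interfaces, keep the first one whose name matches the "vEthernet (Default Switch)" prefix, and take its IPv4 address (172.30.48.1 here). A sketch of that lookup with Go's net package; the helper name is illustrative, not minikube's:

package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2" does not match the prefix above
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			// Skip the fe80:: link-local entry; return the first IPv4 address.
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil
			}
		}
	}
	return nil, fmt.Errorf("no interface matches prefix %q", prefix)
}

func main() {
	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ip)
}
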
	I1212 22:07:28.806204   10056 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 22:07:28.811286   10056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:07:28.830038   10056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 22:07:28.840456   10056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 22:07:28.861687   10056 docker.go:671] Got preloaded images: 
	I1212 22:07:28.861687   10056 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 22:07:28.877303   10056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 22:07:28.907822   10056 ssh_runner.go:195] Run: which lz4
	I1212 22:07:28.923954   10056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 22:07:28.930115   10056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:07:28.930383   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 22:07:32.131240   10056 docker.go:635] Took 3.218195 seconds to copy over tarball
	I1212 22:07:32.143571   10056 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 22:07:38.416339   10056 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.2727402s)
	I1212 22:07:38.416530   10056 ssh_runner.go:146] rm: /preloaded.tar.lz4
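
The preload step copies the lz4-compressed image tarball over and unpacks it with tar's -I filter, then deletes it. A sketch of the extraction call, assuming the same paths as the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("tar failed: %v\n%s", err, out)
	}
}
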
	I1212 22:07:38.483753   10056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 22:07:38.500126   10056 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 22:07:38.544439   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:07:38.718903   10056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 22:07:44.474147   10056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.7552177s)
	I1212 22:07:44.484716   10056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 22:07:44.510907   10056 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 22:07:44.510907   10056 cache_images.go:84] Images are preloaded, skipping loading
	I1212 22:07:44.522004   10056 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 22:07:44.558403   10056 cni.go:84] Creating CNI manager for ""
	I1212 22:07:44.558403   10056 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 22:07:44.558403   10056 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:07:44.558403   10056 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.52.75 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-310200 NodeName:addons-310200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.52.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.52.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:07:44.558403   10056 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.52.75
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-310200"
	  kubeletExtraArgs:
	    node-ip: 172.30.52.75
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.52.75"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 22:07:44.559402   10056 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-310200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.52.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-310200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
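
The kubelet unit and kubeadm YAML dumped above are rendered from templates filled with per-node values (Kubernetes version, node name, node IP). A minimal text/template sketch of the ExecStart drop-in; the struct and field names are illustrative, not minikube's:

package main

import (
	"log"
	"os"
	"text/template"
)

type kubeletOpts struct {
	Version, NodeName, NodeIP string
}

// Abbreviated form of the [Service] override shown in the log.
const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	opts := kubeletOpts{Version: "v1.28.4", NodeName: "addons-310200", NodeIP: "172.30.52.75"}
	if err := t.Execute(os.Stdout, opts); err != nil {
		log.Fatal(err)
	}
}
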
	I1212 22:07:44.573442   10056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:07:44.594485   10056 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:07:44.608869   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:07:44.625546   10056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 22:07:44.654226   10056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:07:44.684588   10056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1212 22:07:44.725924   10056 ssh_runner.go:195] Run: grep 172.30.52.75	control-plane.minikube.internal$ /etc/hosts
	I1212 22:07:44.731365   10056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.52.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:07:44.750966   10056 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200 for IP: 172.30.52.75
	I1212 22:07:44.751112   10056 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:44.751195   10056 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 22:07:44.944396   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt ...
	I1212 22:07:44.944396   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt: {Name:mkfaab427ca81a644dd8158f14f3f807f65e8ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:44.945399   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key ...
	I1212 22:07:44.945399   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key: {Name:mke77f92a4900f4ba92d06a20a85ddb2e967d43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:44.946485   10056 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 22:07:45.061679   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt ...
	I1212 22:07:45.061679   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk06242bb3e648e29b1f160fecc7578d1c3ccbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:45.063591   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key ...
	I1212 22:07:45.063591   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk9dbfc690f0c353aa1a789ba901364f0646dd1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:45.064898   10056 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.key
	I1212 22:07:45.064898   10056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt with IP's: []
	I1212 22:07:45.172244   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt ...
	I1212 22:07:45.172244   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: {Name:mk5bf71b93db107297e4e12b697b9b4869297056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:45.173844   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.key ...
	I1212 22:07:45.173844   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.key: {Name:mk8819db8bd54aefc663a45819dd2d51f19dd79c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:45.174902   10056 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.key.090fcbe2
	I1212 22:07:45.175459   10056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.crt.090fcbe2 with IP's: [172.30.52.75 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:07:45.379318   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.crt.090fcbe2 ...
	I1212 22:07:45.379318   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.crt.090fcbe2: {Name:mk90aaa6f5d8cf79f38be1b4c082638e090a222c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:45.381665   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.key.090fcbe2 ...
	I1212 22:07:45.381665   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.key.090fcbe2: {Name:mk800d63ff54f278a3e6b3846d2af53be298209e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:45.382031   10056 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.crt.090fcbe2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.crt
	I1212 22:07:45.393712   10056 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.key.090fcbe2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.key
	I1212 22:07:45.394714   10056 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\proxy-client.key
	I1212 22:07:45.394714   10056 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\proxy-client.crt with IP's: []
	I1212 22:07:45.531972   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\proxy-client.crt ...
	I1212 22:07:45.531972   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\proxy-client.crt: {Name:mkdf4c207eceb657db42a764430946d3b08781b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:07:45.534299   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\proxy-client.key ...
	I1212 22:07:45.534299   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\proxy-client.key: {Name:mk5391c914e29dc3e75d22febc17f994734c5bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
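
The crypto.go lines above generate a CA, then CA-signed certificates whose IP SANs match the node (the apiserver cert lists 172.30.52.75, 10.96.0.1, 127.0.0.1 and 10.0.0.1). A self-contained sketch of that flow with crypto/x509; key sizes and validity periods are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey) // self-signed
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same IP SANs the log reports for the apiserver certificate.
		IPAddresses: []net.IP{
			net.ParseIP("172.30.52.75"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	if _, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("generated CA and apiserver-style serving certificate")
}
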
	I1212 22:07:45.546628   10056 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 22:07:45.546628   10056 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 22:07:45.547402   10056 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 22:07:45.547873   10056 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 22:07:45.549165   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:07:45.597841   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 22:07:45.641133   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:07:45.685096   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:07:45.725700   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:07:45.767368   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 22:07:45.808790   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:07:45.853273   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 22:07:45.895950   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:07:45.938403   10056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:07:45.978513   10056 ssh_runner.go:195] Run: openssl version
	I1212 22:07:45.997654   10056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:07:46.023801   10056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:07:46.032988   10056 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:07:46.047081   10056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:07:46.067374   10056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:07:46.098555   10056 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:07:46.105513   10056 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:07:46.106110   10056 kubeadm.go:404] StartCluster: {Name:addons-310200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-310200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.52.75 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:07:46.116009   10056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 22:07:46.154744   10056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:07:46.189333   10056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:07:46.220009   10056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:07:46.238735   10056 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:07:46.238989   10056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 22:07:46.536296   10056 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:08:00.983272   10056 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 22:08:00.983552   10056 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:08:00.983784   10056 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:08:00.984050   10056 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:08:00.984378   10056 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 22:08:00.984574   10056 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:08:00.985689   10056 out.go:204]   - Generating certificates and keys ...
	I1212 22:08:00.985933   10056 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:08:00.986004   10056 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:08:00.986220   10056 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:08:00.986369   10056 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:08:00.986525   10056 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:08:00.986606   10056 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:08:00.986852   10056 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:08:00.987182   10056 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-310200 localhost] and IPs [172.30.52.75 127.0.0.1 ::1]
	I1212 22:08:00.987340   10056 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:08:00.987585   10056 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-310200 localhost] and IPs [172.30.52.75 127.0.0.1 ::1]
	I1212 22:08:00.987792   10056 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:08:00.987792   10056 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:08:00.987792   10056 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:08:00.988401   10056 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:08:00.988513   10056 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:08:00.988513   10056 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:08:00.988513   10056 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:08:00.988513   10056 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:08:00.989123   10056 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:08:00.989287   10056 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:08:00.990016   10056 out.go:204]   - Booting up control plane ...
	I1212 22:08:00.990016   10056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:08:00.990723   10056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:08:00.990765   10056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:08:00.990765   10056 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:08:00.990765   10056 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:08:00.991348   10056 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:08:00.991348   10056 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:08:00.991348   10056 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503425 seconds
	I1212 22:08:00.991348   10056 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:08:00.991348   10056 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:08:00.992360   10056 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:08:00.992360   10056 kubeadm.go:322] [mark-control-plane] Marking the node addons-310200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:08:00.992360   10056 kubeadm.go:322] [bootstrap-token] Using token: 1yqi33.rc2o33ritbk0ddxj
	I1212 22:08:00.993345   10056 out.go:204]   - Configuring RBAC rules ...
	I1212 22:08:00.993345   10056 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:08:00.993345   10056 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:08:00.993345   10056 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:08:00.993345   10056 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:08:00.994367   10056 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:08:00.994367   10056 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:08:00.994367   10056 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:08:00.994367   10056 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:08:00.994367   10056 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:08:00.994367   10056 kubeadm.go:322] 
	I1212 22:08:00.994367   10056 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:08:00.994367   10056 kubeadm.go:322] 
	I1212 22:08:00.995368   10056 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:08:00.995368   10056 kubeadm.go:322] 
	I1212 22:08:00.995368   10056 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:08:00.995368   10056 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:08:00.995368   10056 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:08:00.995368   10056 kubeadm.go:322] 
	I1212 22:08:00.995368   10056 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 22:08:00.995368   10056 kubeadm.go:322] 
	I1212 22:08:00.995368   10056 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:08:00.995368   10056 kubeadm.go:322] 
	I1212 22:08:00.995368   10056 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:08:00.996362   10056 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:08:00.996362   10056 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:08:00.996362   10056 kubeadm.go:322] 
	I1212 22:08:00.996362   10056 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:08:00.996362   10056 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:08:00.996362   10056 kubeadm.go:322] 
	I1212 22:08:00.996362   10056 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1yqi33.rc2o33ritbk0ddxj \
	I1212 22:08:00.996362   10056 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 22:08:00.996362   10056 kubeadm.go:322] 	--control-plane 
	I1212 22:08:00.996362   10056 kubeadm.go:322] 
	I1212 22:08:00.997368   10056 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:08:00.997368   10056 kubeadm.go:322] 
	I1212 22:08:00.997368   10056 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1yqi33.rc2o33ritbk0ddxj \
	I1212 22:08:00.997368   10056 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
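
The --discovery-token-ca-cert-hash in the join command above is the SHA-256 digest of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo). Given the CA certificate, it can be recomputed with a short sketch like this; the ca.crt path is taken from the certs directory used earlier:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// DER-encode the public key and hash it, matching kubeadm's format.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
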
	I1212 22:08:00.997368   10056 cni.go:84] Creating CNI manager for ""
	I1212 22:08:00.997368   10056 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 22:08:00.998357   10056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 22:08:01.011373   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 22:08:01.030632   10056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
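
The bridge CNI step writes a conflist into /etc/cni/net.d. The log does not show the file's contents, but a plausible minimal bridge conflist for the 10.244.0.0/16 pod CIDR, printed from Go, might look like the output of this sketch:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR from the kubeadm config above
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
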
	I1212 22:08:01.078796   10056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:08:01.095361   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=addons-310200 minikube.k8s.io/updated_at=2023_12_12T22_08_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:01.099412   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:01.151118   10056 ops.go:34] apiserver oom_adj: -16
	I1212 22:08:01.642351   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:01.801037   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:02.449486   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:02.953972   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:03.456880   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:03.951341   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:04.451449   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:04.950831   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:05.452588   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:05.955106   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:06.460790   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:06.948493   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:07.454240   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:07.954711   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:08.453931   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:08.951416   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:09.458500   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:09.945236   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:10.449674   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:10.952019   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:11.455353   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:11.945436   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:12.455351   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:12.947812   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:08:13.082637   10056 kubeadm.go:1088] duration metric: took 12.0037873s to wait for elevateKubeSystemPrivileges.
	I1212 22:08:13.082776   10056 kubeadm.go:406] StartCluster complete in 26.9765439s
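
The burst of `kubectl get sa default` runs above is a poll loop: minikube retries roughly every 500ms until the default service account exists (about 12s here, per the duration metric). A sketch of that loop, with the overall timeout as an assumption:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same command the log repeats, minus sudo and the SSH runner.
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default service account")
}
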
	I1212 22:08:13.082899   10056 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:08:13.083034   10056 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 22:08:13.083803   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:08:13.084837   10056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:08:13.084837   10056 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1212 22:08:13.084837   10056 addons.go:69] Setting volumesnapshots=true in profile "addons-310200"
	I1212 22:08:13.084837   10056 addons.go:69] Setting cloud-spanner=true in profile "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:231] Setting addon volumesnapshots=true in "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:69] Setting default-storageclass=true in profile "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:69] Setting inspektor-gadget=true in profile "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:231] Setting addon inspektor-gadget=true in "addons-310200"
	I1212 22:08:13.085820   10056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:69] Setting ingress=true in profile "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:69] Setting helm-tiller=true in profile "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:231] Setting addon helm-tiller=true in "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:231] Setting addon ingress=true in "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-310200"
	I1212 22:08:13.085820   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.085820   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.085820   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.085820   10056 addons.go:69] Setting metrics-server=true in profile "addons-310200"
	I1212 22:08:13.085820   10056 addons.go:231] Setting addon metrics-server=true in "addons-310200"
	I1212 22:08:13.085820   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.085820   10056 config.go:182] Loaded profile config "addons-310200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 22:08:13.085820   10056 addons.go:69] Setting gcp-auth=true in profile "addons-310200"
	I1212 22:08:13.086812   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.086812   10056 mustload.go:65] Loading cluster: addons-310200
	I1212 22:08:13.085820   10056 addons.go:69] Setting storage-provisioner=true in profile "addons-310200"
	I1212 22:08:13.086812   10056 addons.go:231] Setting addon storage-provisioner=true in "addons-310200"
	I1212 22:08:13.086812   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.085820   10056 addons.go:69] Setting registry=true in profile "addons-310200"
	I1212 22:08:13.086812   10056 addons.go:231] Setting addon registry=true in "addons-310200"
	I1212 22:08:13.085820   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.086812   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.086812   10056 config.go:182] Loaded profile config "addons-310200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 22:08:13.085820   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.085820   10056 addons.go:231] Setting addon cloud-spanner=true in "addons-310200"
	I1212 22:08:13.087810   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.085820   10056 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-310200"
	I1212 22:08:13.087810   10056 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-310200"
	I1212 22:08:13.088821   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.085820   10056 addons.go:69] Setting ingress-dns=true in profile "addons-310200"
	I1212 22:08:13.088821   10056 addons.go:231] Setting addon ingress-dns=true in "addons-310200"
	I1212 22:08:13.088821   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.085820   10056 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-310200"
	I1212 22:08:13.088821   10056 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-310200"
	I1212 22:08:13.088821   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:13.090842   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.090842   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.091816   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.091816   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.092813   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.093817   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.093817   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.091816   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.098464   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.098666   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.098859   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.098960   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:13.999256   10056 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-310200" context rescaled to 1 replicas
	I1212 22:08:13.999256   10056 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.52.75 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 22:08:14.000338   10056 out.go:177] * Verifying Kubernetes components...
	I1212 22:08:14.051257   10056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
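
The long pipeline above edits the Corefile inside CoreDNS's ConfigMap: it inserts a hosts{} block resolving host.minikube.internal to the host IP just before the forward directive, then replaces the ConfigMap. A pure-Go sketch of the same string edit, standing in for the sed pipeline:

package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Insert the hosts block immediately above the forward directive.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "172.30.48.1"))
}
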
	I1212 22:08:14.110347   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:08:19.195354   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.195354   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.198419   10056 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 22:08:19.200058   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.200058   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.200058   10056 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 22:08:19.200058   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 22:08:19.200460   10056 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1212 22:08:19.200058   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.202194   10056 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1212 22:08:19.202474   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1212 22:08:19.202474   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.203319   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.203460   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.204381   10056 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 22:08:19.207469   10056 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 22:08:19.207469   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 22:08:19.207469   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.267063   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.267063   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.269734   10056 addons.go:231] Setting addon default-storageclass=true in "addons-310200"
	I1212 22:08:19.269734   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:19.271732   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.272730   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.272730   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.275735   10056 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 22:08:19.281767   10056 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:08:19.281909   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 22:08:19.281909   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.389735   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.389735   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.396733   10056 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 22:08:19.406749   10056 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 22:08:19.411652   10056 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 22:08:19.411652   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 22:08:19.411652   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.437502   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.437685   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.442040   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 22:08:19.444115   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 22:08:19.444652   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 22:08:19.452850   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 22:08:19.465820   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 22:08:19.467822   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 22:08:19.469822   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 22:08:19.470814   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 22:08:19.470814   10056 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 22:08:19.470814   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 22:08:19.471824   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.478825   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.478825   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.479824   10056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:08:19.480826   10056 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:08:19.480826   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:08:19.480826   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.495826   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.495826   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.509826   10056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 22:08:19.512830   10056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:08:19.514840   10056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:08:19.516821   10056 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:08:19.516821   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 22:08:19.516821   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.557633   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.557633   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.557633   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:19.612788   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.612788   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.631059   10056 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 22:08:19.646081   10056 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 22:08:19.646081   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 22:08:19.646081   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.882773   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.882773   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.883771   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 22:08:19.884770   10056 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 22:08:19.884770   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 22:08:19.884770   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.894761   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.894761   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.912769   10056 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 22:08:19.915769   10056 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:08:19.915769   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 22:08:19.915769   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:19.972894   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:19.972894   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:19.975896   10056 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-310200"
	I1212 22:08:19.975896   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:19.977905   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:21.646765   10056 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.5954743s)
	I1212 22:08:21.646765   10056 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
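	The pipeline that just completed is how host.minikube.internal becomes resolvable from inside the cluster. Reformatted for readability (same command as logged above, comments added):

	    # Fetch the live CoreDNS ConfigMap, splice a hosts{} block mapping
	    # host.minikube.internal to the host gateway in front of the
	    # forward-to-resolv.conf rule, turn on query logging, and replace
	    # the ConfigMap with the edited copy.
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' \
	            -e '/^        errors *$/i \        log' \
	      | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -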
	I1212 22:08:21.646765   10056 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (7.5363848s)
	I1212 22:08:21.649763   10056 node_ready.go:35] waiting up to 6m0s for node "addons-310200" to be "Ready" ...
	I1212 22:08:21.851129   10056 node_ready.go:49] node "addons-310200" has status "Ready":"True"
	I1212 22:08:21.851129   10056 node_ready.go:38] duration metric: took 201.3657ms waiting for node "addons-310200" to be "Ready" ...
	I1212 22:08:21.851129   10056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
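	minikube drives this wait through client-go internally, but the same readiness gate can be expressed with kubectl alone. A minimal sketch, assuming kubeconfig access to the cluster and using one of the label selectors listed above (the other components follow the same pattern):

	    # Block until every kube-dns pod reports the Ready condition,
	    # or give up after the same 6-minute budget used above.
	    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
	      --for=condition=Ready --timeout=6m0s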
	I1212 22:08:22.310770   10056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-btflp" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:24.582961   10056 pod_ready.go:102] pod "coredns-5dd5756b68-btflp" in "kube-system" namespace has status "Ready":"False"
	I1212 22:08:24.647960   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:24.649033   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:24.649033   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:24.671641   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:24.671641   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:24.671641   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:24.671641   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:24.671641   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:24.671641   10056 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:08:24.671641   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:08:24.671641   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:24.679653   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:24.679653   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:24.679653   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:24.692787   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:24.692787   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:24.692787   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:24.692787   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:24.692787   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:24.692787   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:25.122672   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:25.122672   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:25.122672   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:25.147087   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:25.147388   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:25.147539   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:25.153930   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:25.153930   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:25.153930   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:25.173968   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:25.175526   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:25.175526   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:25.449094   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:25.449094   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:25.449094   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:26.401363   10056 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 22:08:26.401363   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:26.809923   10056 pod_ready.go:102] pod "coredns-5dd5756b68-btflp" in "kube-system" namespace has status "Ready":"False"
	I1212 22:08:27.863225   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:27.863308   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:27.863308   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:29.141146   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:29.141146   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:29.165162   10056 out.go:177]   - Using image docker.io/busybox:stable
	I1212 22:08:29.174202   10056 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 22:08:29.430016   10056 pod_ready.go:102] pod "coredns-5dd5756b68-btflp" in "kube-system" namespace has status "Ready":"False"
	I1212 22:08:29.826020   10056 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:08:29.826020   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 22:08:29.826020   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:29.993766   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:29.993766   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:29.993766   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:30.596127   10056 pod_ready.go:92] pod "coredns-5dd5756b68-btflp" in "kube-system" namespace has status "Ready":"True"
	I1212 22:08:30.596127   10056 pod_ready.go:81] duration metric: took 8.2853198s waiting for pod "coredns-5dd5756b68-btflp" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:30.596127   10056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cgh6t" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:31.212027   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:31.212027   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:31.213466   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
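	The sshutil line records everything needed to reproduce the connection by hand, which is useful when a command from this log has to be re-run interactively. All values below are taken from the log line above:

	    # Open the same SSH session minikube just created.
	    ssh -p 22 -i 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa' \
	      docker@172.30.52.75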
	I1212 22:08:31.338548   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:31.338548   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:31.340407   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:31.491530   10056 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1212 22:08:31.491530   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1212 22:08:31.516585   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:31.516585   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:31.517771   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:31.574443   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:31.574443   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:31.575531   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:31.592593   10056 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:08:31.592593   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1212 22:08:31.633990   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:31.634061   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:31.634978   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:31.706034   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:08:31.713032   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:08:31.904415   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:31.904415   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:31.904415   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:31.952382   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:31.952382   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:31.952382   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:31.963034   10056 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 22:08:31.963144   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 22:08:31.997849   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:31.998108   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:31.998655   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:32.036397   10056 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 22:08:32.036507   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 22:08:32.054432   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:32.054537   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:32.055564   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:32.075774   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:32.075774   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:32.075774   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:32.121277   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 22:08:32.157476   10056 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:08:32.157476   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 22:08:32.290678   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:08:32.297687   10056 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 22:08:32.298686   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 22:08:32.331409   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:32.331409   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:32.332153   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:32.439896   10056 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 22:08:32.440889   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 22:08:32.607004   10056 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 22:08:32.607161   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 22:08:32.613845   10056 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 22:08:32.613845   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 22:08:32.651661   10056 pod_ready.go:102] pod "coredns-5dd5756b68-cgh6t" in "kube-system" namespace has status "Ready":"False"
	I1212 22:08:32.701986   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:08:32.702992   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:08:32.751016   10056 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 22:08:32.751016   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 22:08:32.769007   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:08:32.836256   10056 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 22:08:32.836256   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 22:08:32.993752   10056 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 22:08:32.993847   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 22:08:33.012467   10056 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 22:08:33.012467   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 22:08:33.123160   10056 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 22:08:33.123435   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 22:08:33.241639   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:33.241698   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:33.242858   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:33.332499   10056 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 22:08:33.332590   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 22:08:33.364321   10056 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 22:08:33.364321   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 22:08:33.428032   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:33.428359   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:33.428420   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:33.468168   10056 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:08:33.468252   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 22:08:33.551597   10056 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:08:33.551597   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 22:08:33.625028   10056 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:08:33.625028   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 22:08:33.660113   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:08:33.773428   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:08:33.780414   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:08:33.807420   10056 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 22:08:33.807420   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 22:08:33.899093   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.1930489s)
	I1212 22:08:33.916765   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:33.917008   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:33.918078   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:34.087621   10056 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 22:08:34.087699   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 22:08:34.263477   10056 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 22:08:34.263566   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 22:08:34.509814   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:08:34.595364   10056 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 22:08:34.595505   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 22:08:34.656009   10056 pod_ready.go:102] pod "coredns-5dd5756b68-cgh6t" in "kube-system" namespace has status "Ready":"False"
	I1212 22:08:34.796561   10056 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 22:08:34.796664   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 22:08:34.945301   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:34.945301   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:34.945301   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:35.017507   10056 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 22:08:35.017507   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 22:08:35.307223   10056 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 22:08:35.307297   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 22:08:35.354338   10056 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 22:08:35.354338   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 22:08:35.574780   10056 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 22:08:35.574839   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 22:08:35.657035   10056 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 22:08:35.725404   10056 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:08:35.725478   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 22:08:35.831899   10056 addons.go:231] Setting addon gcp-auth=true in "addons-310200"
	I1212 22:08:35.831973   10056 host.go:66] Checking if "addons-310200" exists ...
	I1212 22:08:35.833331   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:35.953789   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:08:36.144833   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.4317806s)
	I1212 22:08:36.254869   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:36.255060   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:36.255860   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:36.659838   10056 pod_ready.go:102] pod "coredns-5dd5756b68-cgh6t" in "kube-system" namespace has status "Ready":"False"
	I1212 22:08:37.179659   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.0582972s)
	I1212 22:08:37.397230   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:08:38.207371   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:38.207371   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:38.224371   10056 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 22:08:38.224371   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-310200 ).state
	I1212 22:08:38.661234   10056 pod_ready.go:97] pod "coredns-5dd5756b68-cgh6t" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:08:13 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:08:13 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:08:13 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:08:13 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.30.52.75 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-12-12 22:08:13 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-12-12 22:08:26 +0000 UTC,FinishedAt:2023-12-12 22:08:36 +0000 UTC,ContainerID:docker://c95d583b74386960ec0bdf02ce968101ca89e3a1963bf5a04b9f87d1a4c62e16,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://c95d583b74386960ec0bdf02ce968101ca89e3a1963bf5a04b9f87d1a4c62e16 Started:0xc0031dc3f0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 22:08:38.661303   10056 pod_ready.go:81] duration metric: took 8.0651393s waiting for pod "coredns-5dd5756b68-cgh6t" in "kube-system" namespace to be "Ready" ...
	E1212 22:08:38.661348   10056 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-cgh6t" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:08:13 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:08:13 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:08:13 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:08:13 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.30.52.75 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-12-12 22:08:13 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-12-12 22:08:26 +0000 UTC,FinishedAt:2023-12-12 22:08:36 +0000 UTC,ContainerID:docker://c95d583b74386960ec0bdf02ce968101ca89e3a1963bf5a04b9f87d1a4c62e16,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://c95d583b74386960ec0bdf02ce968101ca89e3a1963bf5a04b9f87d1a4c62e16 Started:0xc0031dc3f0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 22:08:38.661348   10056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-310200" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:38.674182   10056 pod_ready.go:92] pod "etcd-addons-310200" in "kube-system" namespace has status "Ready":"True"
	I1212 22:08:38.674182   10056 pod_ready.go:81] duration metric: took 12.834ms waiting for pod "etcd-addons-310200" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:38.674238   10056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-310200" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:38.681855   10056 pod_ready.go:92] pod "kube-apiserver-addons-310200" in "kube-system" namespace has status "Ready":"True"
	I1212 22:08:38.681913   10056 pod_ready.go:81] duration metric: took 7.6756ms waiting for pod "kube-apiserver-addons-310200" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:38.681913   10056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-310200" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:38.694120   10056 pod_ready.go:92] pod "kube-controller-manager-addons-310200" in "kube-system" namespace has status "Ready":"True"
	I1212 22:08:38.694170   10056 pod_ready.go:81] duration metric: took 12.257ms waiting for pod "kube-controller-manager-addons-310200" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:38.694222   10056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hhqts" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:38.704344   10056 pod_ready.go:92] pod "kube-proxy-hhqts" in "kube-system" namespace has status "Ready":"True"
	I1212 22:08:38.704344   10056 pod_ready.go:81] duration metric: took 10.1214ms waiting for pod "kube-proxy-hhqts" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:38.704398   10056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-310200" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:39.058568   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.7668176s)
	I1212 22:08:39.058568   10056 addons.go:467] Verifying addon registry=true in "addons-310200"
	I1212 22:08:39.059838   10056 out.go:177] * Verifying registry addon...
	I1212 22:08:39.063246   10056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 22:08:39.077664   10056 pod_ready.go:92] pod "kube-scheduler-addons-310200" in "kube-system" namespace has status "Ready":"True"
	I1212 22:08:39.077664   10056 pod_ready.go:81] duration metric: took 373.2648ms waiting for pod "kube-scheduler-addons-310200" in "kube-system" namespace to be "Ready" ...
	I1212 22:08:39.077664   10056 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 22:08:39.077664   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:39.077664   10056 pod_ready.go:38] duration metric: took 17.2264577s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:08:39.077829   10056 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:08:39.089134   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:39.092992   10056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:08:39.606736   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:40.100362   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:40.603056   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:40.724411   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:08:40.724663   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:40.724729   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-310200 ).networkadapters[0]).ipaddresses[0]
	I1212 22:08:41.110084   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:41.606268   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:41.926809   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.2237752s)
	I1212 22:08:41.926809   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.2237752s)
	I1212 22:08:42.105343   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:42.769371   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:43.132036   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:43.571983   10056 main.go:141] libmachine: [stdout =====>] : 172.30.52.75
	
	I1212 22:08:43.572245   10056 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:08:43.572806   10056 sshutil.go:53] new ssh client: &{IP:172.30.52.75 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-310200\id_rsa Username:docker}
	I1212 22:08:43.631267   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:44.101036   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:44.597150   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:45.101749   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:45.616407   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:46.109408   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:46.365613   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.5964783s)
	I1212 22:08:46.365684   10056 addons.go:467] Verifying addon ingress=true in "addons-310200"
	I1212 22:08:46.365684   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.7055144s)
	I1212 22:08:46.366447   10056 out.go:177] * Verifying ingress addon...
	I1212 22:08:46.366039   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.592441s)
	I1212 22:08:46.366154   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.5856827s)
	W1212 22:08:46.366447   10056 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 22:08:46.366447   10056 addons.go:467] Verifying addon metrics-server=true in "addons-310200"
	I1212 22:08:46.366154   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.8562861s)
	I1212 22:08:46.366447   10056 retry.go:31] will retry after 218.626832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
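	The retried failure above is the usual CRD ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and the API server has not finished establishing the new kind when the custom resource arrives, hence "ensure CRDs are installed first". minikube handles it by retrying (and, a few lines below, re-applying with --force); done by hand, the race is avoided by waiting for the CRD to become established before applying dependents. A sketch, assuming the same manifest paths as above:

	    # Install the CRD, wait until the API server accepts the new kind,
	    # then apply the object that depends on it.
	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml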
	I1212 22:08:46.370455   10056 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 22:08:46.374864   10056 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 22:08:46.374864   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:46.380916   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:46.603672   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:08:46.610208   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:46.894855   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:47.101424   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:47.408931   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:47.621677   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:47.912927   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:48.109055   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:48.396752   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:48.602014   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:48.940654   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:49.118055   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:49.155074   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (13.2002344s)
	I1212 22:08:49.155074   10056 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-310200"
	I1212 22:08:49.155074   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.7577914s)
	I1212 22:08:49.155419   10056 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 22:08:49.155074   10056 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.0620367s)
	I1212 22:08:49.155419   10056 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.9306546s)
	I1212 22:08:49.156430   10056 api_server.go:72] duration metric: took 35.1570157s to wait for apiserver process to appear ...
	I1212 22:08:49.156430   10056 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:08:49.157423   10056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:08:49.156430   10056 api_server.go:253] Checking apiserver healthz at https://172.30.52.75:8443/healthz ...
	I1212 22:08:49.157423   10056 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 22:08:49.158426   10056 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 22:08:49.158426   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 22:08:49.158426   10056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 22:08:49.174886   10056 api_server.go:279] https://172.30.52.75:8443/healthz returned 200:
	ok
	I1212 22:08:49.177891   10056 api_server.go:141] control plane version: v1.28.4
	I1212 22:08:49.177891   10056 api_server.go:131] duration metric: took 21.4611ms to wait for apiserver health ...
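	The healthz probe above is a plain HTTPS GET against the API server. The equivalent check from a shell, with authentication handled by the kubeconfig instead of raw curl, is:

	    # Prints the literal string "ok" when the control plane is healthy.
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/healthz'
	    # Per-check breakdown, useful when the endpoint stops returning 200.
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/healthz?verbose'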
	I1212 22:08:49.177891   10056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:08:49.178908   10056 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 22:08:49.178908   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:08:49.197966   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:08:49.199144   10056 system_pods.go:59] 18 kube-system pods found
	I1212 22:08:49.199265   10056 system_pods.go:61] "coredns-5dd5756b68-btflp" [3a83dce2-86bc-40f3-9421-3e3461e3bf8d] Running
	I1212 22:08:49.199265   10056 system_pods.go:61] "csi-hostpath-attacher-0" [6e2d7852-732a-430c-87af-88c14f4839d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 22:08:49.199265   10056 system_pods.go:61] "csi-hostpath-resizer-0" [871b9792-08af-4490-b553-39d233d142c0] Pending
	I1212 22:08:49.199265   10056 system_pods.go:61] "csi-hostpathplugin-fh24n" [eebb5d8a-773a-4622-becd-0e01ce0014b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:08:49.199265   10056 system_pods.go:61] "etcd-addons-310200" [c07d9a22-2f51-4e7b-9fa8-a0c23c2410ed] Running
	I1212 22:08:49.199265   10056 system_pods.go:61] "kube-apiserver-addons-310200" [95f31fef-128d-4cfb-8137-33d124a29aec] Running
	I1212 22:08:49.199265   10056 system_pods.go:61] "kube-controller-manager-addons-310200" [a726858e-ff71-41cd-83f9-78fa8b398de2] Running
	I1212 22:08:49.199265   10056 system_pods.go:61] "kube-ingress-dns-minikube" [91b8e1b7-d105-4367-86de-f532d2249d85] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 22:08:49.199265   10056 system_pods.go:61] "kube-proxy-hhqts" [1db37c84-aa76-4a1e-b5ae-b8748679af45] Running
	I1212 22:08:49.199265   10056 system_pods.go:61] "kube-scheduler-addons-310200" [71e79d1d-b07a-45c5-9265-bf744a09c839] Running
	I1212 22:08:49.199265   10056 system_pods.go:61] "metrics-server-7c66d45ddc-pl9km" [863bb503-8b23-4306-b284-ff91c5ee39e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 22:08:49.199265   10056 system_pods.go:61] "nvidia-device-plugin-daemonset-prm68" [4af535c0-f663-4ff2-ab82-4c2c85b58970] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 22:08:49.199265   10056 system_pods.go:61] "registry-gzmdd" [ce060e5a-4538-49fd-b48d-35ccd21eb735] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 22:08:49.199265   10056 system_pods.go:61] "registry-proxy-s6pvn" [ebbb0e34-e907-46c7-bb28-c282ed55fb13] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:08:49.199265   10056 system_pods.go:61] "snapshot-controller-58dbcc7b99-px9hj" [130f1d7f-7755-45cb-8952-d9ed92579c68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:08:49.199265   10056 system_pods.go:61] "snapshot-controller-58dbcc7b99-xv7b4" [2149ffa4-38ad-4a7b-b450-4235d8b34af2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:08:49.199265   10056 system_pods.go:61] "storage-provisioner" [9e1470b4-74a0-4fb7-b7cb-124fef2e1f3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 22:08:49.199265   10056 system_pods.go:61] "tiller-deploy-7b677967b9-gggqt" [46c817d4-ae34-4a1d-be4b-7b0de0f5ee40] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1212 22:08:49.199265   10056 system_pods.go:74] duration metric: took 21.3744ms to wait for pod list to return data ...
	I1212 22:08:49.199265   10056 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:08:49.201810   10056 default_sa.go:45] found service account: "default"
	I1212 22:08:49.201810   10056 default_sa.go:55] duration metric: took 2.5446ms for default service account to be created ...
	I1212 22:08:49.201810   10056 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:08:49.215597   10056 system_pods.go:86] 18 kube-system pods found
	I1212 22:08:49.215597   10056 system_pods.go:89] "coredns-5dd5756b68-btflp" [3a83dce2-86bc-40f3-9421-3e3461e3bf8d] Running
	I1212 22:08:49.215597   10056 system_pods.go:89] "csi-hostpath-attacher-0" [6e2d7852-732a-430c-87af-88c14f4839d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 22:08:49.215597   10056 system_pods.go:89] "csi-hostpath-resizer-0" [871b9792-08af-4490-b553-39d233d142c0] Pending
	I1212 22:08:49.215597   10056 system_pods.go:89] "csi-hostpathplugin-fh24n" [eebb5d8a-773a-4622-becd-0e01ce0014b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:08:49.215597   10056 system_pods.go:89] "etcd-addons-310200" [c07d9a22-2f51-4e7b-9fa8-a0c23c2410ed] Running
	I1212 22:08:49.215597   10056 system_pods.go:89] "kube-apiserver-addons-310200" [95f31fef-128d-4cfb-8137-33d124a29aec] Running
	I1212 22:08:49.215597   10056 system_pods.go:89] "kube-controller-manager-addons-310200" [a726858e-ff71-41cd-83f9-78fa8b398de2] Running
	I1212 22:08:49.215597   10056 system_pods.go:89] "kube-ingress-dns-minikube" [91b8e1b7-d105-4367-86de-f532d2249d85] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 22:08:49.215597   10056 system_pods.go:89] "kube-proxy-hhqts" [1db37c84-aa76-4a1e-b5ae-b8748679af45] Running
	I1212 22:08:49.215597   10056 system_pods.go:89] "kube-scheduler-addons-310200" [71e79d1d-b07a-45c5-9265-bf744a09c839] Running
	I1212 22:08:49.215597   10056 system_pods.go:89] "metrics-server-7c66d45ddc-pl9km" [863bb503-8b23-4306-b284-ff91c5ee39e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 22:08:49.215597   10056 system_pods.go:89] "nvidia-device-plugin-daemonset-prm68" [4af535c0-f663-4ff2-ab82-4c2c85b58970] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 22:08:49.215597   10056 system_pods.go:89] "registry-gzmdd" [ce060e5a-4538-49fd-b48d-35ccd21eb735] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 22:08:49.215597   10056 system_pods.go:89] "registry-proxy-s6pvn" [ebbb0e34-e907-46c7-bb28-c282ed55fb13] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:08:49.215597   10056 system_pods.go:89] "snapshot-controller-58dbcc7b99-px9hj" [130f1d7f-7755-45cb-8952-d9ed92579c68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:08:49.215597   10056 system_pods.go:89] "snapshot-controller-58dbcc7b99-xv7b4" [2149ffa4-38ad-4a7b-b450-4235d8b34af2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:08:49.215597   10056 system_pods.go:89] "storage-provisioner" [9e1470b4-74a0-4fb7-b7cb-124fef2e1f3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 22:08:49.215597   10056 system_pods.go:89] "tiller-deploy-7b677967b9-gggqt" [46c817d4-ae34-4a1d-be4b-7b0de0f5ee40] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1212 22:08:49.215597   10056 system_pods.go:126] duration metric: took 13.7867ms to wait for k8s-apps to be running ...
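The Running/Pending states logged per pod above are read from the pods' status conditions; the same snapshot can be reproduced with a plain listing of the namespace (again assuming the "addons-310200" kubeconfig context):

    # READY and STATUS columns mirror the Ready/ContainersReady conditions logged above
    kubectl --context addons-310200 get pods -n kube-system -o wide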
	I1212 22:08:49.215597   10056 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:08:49.228664   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
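system_svc.go runs this check inside the guest over SSH; done by hand it would look roughly like the sketch below ("--quiet" makes systemctl report the unit state only through its exit code; the profile flag and a POSIX shell are assumptions of the sketch):

    # exit code 0 means the kubelet unit is active
    minikube -p addons-310200 ssh "sudo systemctl is-active --quiet kubelet"
    echo $?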
	I1212 22:08:49.363768   10056 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 22:08:49.363844   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 22:08:49.389690   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:49.493659   10056 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:08:49.493659   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 22:08:49.611754   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:49.713694   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:08:49.720831   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:08:49.895678   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:50.115132   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:50.209019   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:08:50.400544   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:50.532683   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.928993s)
	I1212 22:08:50.532683   10056 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.3030748s)
	I1212 22:08:50.532683   10056 system_svc.go:56] duration metric: took 1.3170803s WaitForService to wait for kubelet.
	I1212 22:08:50.532683   10056 kubeadm.go:581] duration metric: took 36.5332628s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:08:50.532683   10056 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:08:50.536661   10056 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:08:50.536661   10056 node_conditions.go:123] node cpu capacity is 2
	I1212 22:08:50.536661   10056 node_conditions.go:105] duration metric: took 3.9779ms to run NodePressure ...
	I1212 22:08:50.536661   10056 start.go:228] waiting for startup goroutines ...
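The capacity figures logged by node_conditions.go (ephemeral storage and cpu) come from the node's .status.capacity map; an equivalent one-off query, assuming the same "addons-310200" context, is:

    # prints the capacity map, e.g. {"cpu":"2", ..., "ephemeral-storage":"17784752Ki", ...}
    kubectl --context addons-310200 get node addons-310200 -o jsonpath='{.status.capacity}'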
	I1212 22:08:50.602201   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:50.714719   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:08:50.902252   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:51.110378   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:51.221673   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:08:51.392825   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:51.600545   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:51.709253   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:08:51.895007   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:08:52.113297   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:08:52.205060   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.4913549s)
	I1212 22:08:52.212388   10056 addons.go:467] Verifying addon gcp-auth=true in "addons-310200"
	I1212 22:08:52.213700   10056 out.go:177] * Verifying gcp-auth addon...
	I1212 22:08:52.215311   10056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 22:08:52.219922   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:08:52.228221   10056 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 22:08:52.228221   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
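kapi.go keeps re-listing the label selector until the pod leaves Pending; with kubectl the same wait can be expressed declaratively (the 5m timeout below is an illustrative choice, not the test's actual deadline):

    # blocks until the gcp-auth pod reports Ready, or fails after the timeout
    kubectl --context addons-310200 -n gcp-auth wait pod \
      -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=5m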
	I1212 22:08:52.240529   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... ~380 near-identical kapi.go:96 lines elided: between 22:08:52 and 22:09:39 the four label selectors "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=registry", "kubernetes.io/minikube-addons=csi-hostpath-driver" and "kubernetes.io/minikube-addons=gcp-auth" are each re-polled at sub-second intervals, and every poll reports current state: Pending: [<nil>] ...]
	I1212 22:09:39.706838   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:39.749997   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:39.893542   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:40.098435   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:40.210765   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:40.255224   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:40.401138   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:40.610065   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:40.715779   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:40.761937   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:40.889083   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:41.098613   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:41.207337   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:41.251663   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:41.394622   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:41.602646   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:41.712468   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:41.756624   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:41.903053   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:42.108409   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:42.217104   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:42.246064   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:42.406367   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:42.610560   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:42.706828   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:42.749856   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:42.895667   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:43.099389   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:43.213764   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:43.255133   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:43.400261   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:43.605694   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:43.718010   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:43.746552   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:43.891651   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:44.116308   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:44.207401   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:44.252490   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:44.396129   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:44.599939   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:44.709733   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:44.752903   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:44.898069   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:45.105562   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:45.215981   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:45.260846   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:45.388000   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:45.610140   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:45.720394   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:45.753052   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:45.894609   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:46.102691   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:46.214026   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:46.258091   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:46.404391   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:46.608690   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:46.719545   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:46.747994   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:46.893103   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:47.099789   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:47.211682   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:47.251870   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:47.393003   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:47.612959   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:47.721075   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:47.748837   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:47.891901   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:48.113302   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:48.222804   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:48.247332   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:48.404947   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:48.610376   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:48.720429   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:48.747923   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:48.894676   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:49.113152   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:09:49.212026   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:49.257604   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:49.400992   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:09:49.607069   10056 kapi.go:107] duration metric: took 1m10.5435049s to wait for kubernetes.io/minikube-addons=registry ...
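The cadence above (each label selector re-checked roughly every 500ms until its pods leave Pending) is a standard client-go polling loop. Below is a minimal sketch of that pattern; it is not minikube's actual kapi.go code, and the function name waitForPodsByLabel, the 500ms interval, and the kube-system namespace are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel polls pods matching selector until all of them are
// Running, mirroring the "waiting for pod ... Pending" lines in the log.
// The name, namespace, and interval are illustrative assumptions.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Not scheduled yet, or a transient API error: keep polling.
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil // every matching pod is Running
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsByLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("registry pods are Running")
}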
	I1212 22:09:49.725544   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:09:49.745788   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:09:49.888576   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 303 near-identical kapi.go:96 poll lines elided: csi-hostpath-driver, gcp-auth, and ingress-nginx were each re-checked roughly every 500ms from 22:09:49 through 22:10:40, always reporting Pending: [<nil>] ...]
	I1212 22:10:40.712270   10056 kapi.go:107] duration metric: took 1m51.5533418s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
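The kapi.go:107 "duration metric" lines bracket each wait: the elapsed time is measured around the polling loop and logged once the pods are ready. A minimal, self-contained sketch of that bookkeeping follows; waitForPods here is a hypothetical stand-in for the polling helper sketched earlier, not a real minikube or client-go API.

package main

import (
	"log"
	"time"
)

// waitForPods stands in for the label-polling loop; the sleep is a
// placeholder for the real ~500ms poll-until-Running logic.
func waitForPods(selector string) error {
	time.Sleep(2 * time.Second)
	return nil
}

func main() {
	start := time.Now()
	if err := waitForPods("kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		log.Fatal(err)
	}
	// Produces a line in the same shape as the kapi.go:107 entries above.
	log.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...", time.Since(start))
}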
	I1212 22:10:40.757298   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:40.899286   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 32 near-identical kapi.go:96 poll lines elided: gcp-auth and ingress-nginx were each re-checked roughly every 500ms and still reported Pending: [<nil>] when this excerpt ends at 22:10:48 ...]
	I1212 22:10:49.256130   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:49.400952   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:49.760883   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:49.903263   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:50.260406   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:50.401542   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:50.758712   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:50.899291   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:51.255355   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:51.397140   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:51.759869   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:51.901237   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:52.256170   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:52.398176   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:52.756342   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:52.897847   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:53.258163   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:53.395621   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:53.754384   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:53.896479   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:54.256751   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:54.398240   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:54.746731   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:54.889753   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:55.251172   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:55.391810   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:55.751631   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:55.893454   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:56.253928   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:56.395456   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:56.758049   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:56.895450   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:57.254664   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:57.395025   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:57.756020   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:57.896896   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:58.258396   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:58.396060   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:58.753642   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:58.894739   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:59.255082   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:59.395518   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:10:59.755803   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:10:59.900144   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:00.261820   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:00.402954   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:00.762340   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:00.901541   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:01.250479   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:01.393325   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:01.752178   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:01.896319   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:02.253866   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:02.395746   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:02.754653   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:02.899939   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:03.255073   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:03.397401   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:03.759586   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:03.899851   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:04.260449   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:04.401280   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:04.749980   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:04.894763   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:05.259883   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:05.403081   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:05.748328   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:05.890987   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:06.248823   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:06.393998   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:06.758733   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:06.903745   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:07.254710   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:07.395721   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:07.761741   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:07.904232   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:08.252235   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:08.404779   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:08.760973   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:08.889478   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:09.257730   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:09.397478   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:09.746525   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:09.892024   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:10.255639   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:10.400712   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:10.748388   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:10.892056   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:11.256987   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:11.398025   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:11.761736   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:11.903126   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:12.253923   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:12.393145   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:12.757233   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:12.901821   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:13.251792   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:13.395090   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:13.759320   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:13.902613   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:14.252007   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:14.394339   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:14.755937   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:14.901392   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:15.250778   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:15.420655   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:15.755647   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:15.899356   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:16.259869   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:16.388318   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:16.750270   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:16.892790   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:17.269689   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:17.403730   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:17.925273   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:17.926031   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:18.253361   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:18.392693   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:18.751063   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:18.894252   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:19.253397   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:19.398051   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:19.763841   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:19.896696   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:11:20.252890   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:20.398720   10056 kapi.go:107] duration metric: took 2m34.0276304s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 22:11:20.746540   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:21.254397   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:21.750217   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:11:22.262432   10056 kapi.go:107] duration metric: took 2m30.0464466s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 22:11:22.263377   10056 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-310200 cluster.
	I1212 22:11:22.263800   10056 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 22:11:22.264433   10056 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 22:11:22.266041   10056 out.go:177] * Enabled addons: nvidia-device-plugin, helm-tiller, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1212 22:11:22.266770   10056 addons.go:502] enable addons completed in 3m9.1810818s: enabled=[nvidia-device-plugin helm-tiller cloud-spanner ingress-dns storage-provisioner inspektor-gadget metrics-server default-storageclass storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1212 22:11:22.266947   10056 start.go:233] waiting for cluster config update ...
	I1212 22:11:22.266947   10056 start.go:242] writing updated cluster config ...
	I1212 22:11:22.281779   10056 ssh_runner.go:195] Run: rm -f paused
	I1212 22:11:22.533303   10056 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 22:11:22.534424   10056 out.go:177] * Done! kubectl is now configured to use "addons-310200" cluster and "default" namespace by default
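	
	For context on the kapi.go:96 lines above: minikube polls each addon's pods by label selector (for example kubernetes.io/minikube-addons=gcp-auth) roughly every 500ms until they leave Pending, then records the duration metric seen at kapi.go:107. The client-go sketch below illustrates that polling pattern; it is not minikube's actual implementation — the function name, timeout, and error handling are assumptions, and only the selectors, namespace, and poll interval are taken from the log.
	
	package main
	
	import (
	    "context"
	    "fmt"
	    "time"
	
	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPodsRunning polls pods matching selector in ns until every pod
	// is Running or ctx expires (hypothetical helper; illustrative only).
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	    for {
	        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	        if err != nil {
	            return err
	        }
	        ready := len(pods.Items) > 0
	        for _, p := range pods.Items {
	            if p.Status.Phase != corev1.PodRunning {
	                fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
	                ready = false
	            }
	        }
	        if ready {
	            return nil
	        }
	        select {
	        case <-ctx.Done():
	            return ctx.Err() // overall deadline, as with minikube's addon wait
	        case <-time.After(500 * time.Millisecond): // log timestamps are ~500ms apart
	        }
	    }
	}
	
	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    if err != nil {
	        panic(err)
	    }
	    ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	    defer cancel()
	    _ = waitForPodsRunning(ctx, kubernetes.NewForConfigOrDie(cfg), "gcp-auth",
	        "kubernetes.io/minikube-addons=gcp-auth")
	}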
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 22:06:05 UTC, ends at Tue 2023-12-12 22:12:19 UTC. --
	Dec 12 22:12:05 addons-310200 dockerd[1329]: time="2023-12-12T22:12:05.208644119Z" level=warning msg="cleaning up after shim disconnected" id=0d728dd6fb08845305eb77351740ddb44b05752cacc714c8fec9c225deb15f70 namespace=moby
	Dec 12 22:12:05 addons-310200 dockerd[1329]: time="2023-12-12T22:12:05.208760935Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 12 22:12:09 addons-310200 dockerd[1323]: time="2023-12-12T22:12:09.324952281Z" level=info msg="ignoring event" container=f3f29b0b2acefa5b84cbd7bebd999665add878e4adcebd10c584c7bf783c397f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 12 22:12:09 addons-310200 dockerd[1329]: time="2023-12-12T22:12:09.325905603Z" level=info msg="shim disconnected" id=f3f29b0b2acefa5b84cbd7bebd999665add878e4adcebd10c584c7bf783c397f namespace=moby
	Dec 12 22:12:09 addons-310200 dockerd[1329]: time="2023-12-12T22:12:09.325995914Z" level=warning msg="cleaning up after shim disconnected" id=f3f29b0b2acefa5b84cbd7bebd999665add878e4adcebd10c584c7bf783c397f namespace=moby
	Dec 12 22:12:09 addons-310200 dockerd[1329]: time="2023-12-12T22:12:09.326010016Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 12 22:12:09 addons-310200 dockerd[1323]: time="2023-12-12T22:12:09.468186841Z" level=info msg="ignoring event" container=a5b22452496bf3d169c9d498dbfb3b8119e725757c245d0bf6625d60d11a5b46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 12 22:12:09 addons-310200 dockerd[1329]: time="2023-12-12T22:12:09.470245703Z" level=info msg="shim disconnected" id=a5b22452496bf3d169c9d498dbfb3b8119e725757c245d0bf6625d60d11a5b46 namespace=moby
	Dec 12 22:12:09 addons-310200 dockerd[1329]: time="2023-12-12T22:12:09.470984297Z" level=warning msg="cleaning up after shim disconnected" id=a5b22452496bf3d169c9d498dbfb3b8119e725757c245d0bf6625d60d11a5b46 namespace=moby
	Dec 12 22:12:09 addons-310200 dockerd[1329]: time="2023-12-12T22:12:09.471079609Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 12 22:12:12 addons-310200 dockerd[1329]: time="2023-12-12T22:12:12.352646482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 22:12:12 addons-310200 dockerd[1329]: time="2023-12-12T22:12:12.352903313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:12:12 addons-310200 dockerd[1329]: time="2023-12-12T22:12:12.353023627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 22:12:12 addons-310200 dockerd[1329]: time="2023-12-12T22:12:12.353044130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:12:12 addons-310200 cri-dockerd[1218]: time="2023-12-12T22:12:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6eb0bf11b6a233a3088012d370044f65c6676ffd2ffc9e2e8006e09ac53ad700/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 12 22:12:17 addons-310200 cri-dockerd[1218]: time="2023-12-12T22:12:17Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Dec 12 22:12:18 addons-310200 dockerd[1329]: time="2023-12-12T22:12:18.105124373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 22:12:18 addons-310200 dockerd[1329]: time="2023-12-12T22:12:18.105272489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:12:18 addons-310200 dockerd[1329]: time="2023-12-12T22:12:18.105453009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 22:12:18 addons-310200 dockerd[1329]: time="2023-12-12T22:12:18.105518916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:12:18 addons-310200 dockerd[1329]: time="2023-12-12T22:12:18.245635771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 22:12:18 addons-310200 dockerd[1329]: time="2023-12-12T22:12:18.246098422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:12:18 addons-310200 dockerd[1329]: time="2023-12-12T22:12:18.246620780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 22:12:18 addons-310200 dockerd[1329]: time="2023-12-12T22:12:18.246893710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:12:18 addons-310200 cri-dockerd[1218]: time="2023-12-12T22:12:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1f1e9f752343ccb9e307c313970155cd5eb2b8f394260b2c16a8530a3fcb7c11/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	4c21238c0a5a0       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                                                2 seconds ago        Running             nginx                                    0                   6eb0bf11b6a23       nginx
	655bb08204106       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 58 seconds ago       Running             gcp-auth                                 0                   b13eb2af3ecac       gcp-auth-d4c87556c-j7l7m
	80ed990034676       registry.k8s.io/ingress-nginx/controller@sha256:5b161f051d017e55d358435f295f5e9a297e66158f136321d9b04520ec6c48a3                             About a minute ago   Running             controller                               0                   bfbec5f1f4b22       ingress-nginx-controller-7c6974c4d8-ltlr9
	5b5363bb8a6ae       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   fe45ee532b1ba       csi-hostpathplugin-fh24n
	8e5b6074b1cd2       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   fe45ee532b1ba       csi-hostpathplugin-fh24n
	b431096bbd96e       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   fe45ee532b1ba       csi-hostpathplugin-fh24n
	6c0fd5f9d9b84       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   fe45ee532b1ba       csi-hostpathplugin-fh24n
	e91faf5b2e680       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   fe45ee532b1ba       csi-hostpathplugin-fh24n
	4e9cdba5f229e       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   af8aac65d33d2       csi-hostpath-resizer-0
	6e1415e5bcfa1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   c412f4a518bf2       csi-hostpath-attacher-0
	a4635b8398c91       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   fe45ee532b1ba       csi-hostpathplugin-fh24n
	641e5dedbc739       1ebff0f9671bc                                                                                                                                2 minutes ago        Exited              patch                                    1                   a793083ef6fac       ingress-nginx-admission-patch-mcqfn
	1ec6fe000ef48       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   2 minutes ago        Exited              create                                   0                   ed5dbda9ae595       ingress-nginx-admission-create-whlkv
	97fa41a5398e4       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   a310d79d4f5d8       local-path-provisioner-78b46b4d5c-s8c4n
	92c2104c63812       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   589b2242669e1       snapshot-controller-58dbcc7b99-px9hj
	de7a012314376       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   70b9328343dfc       snapshot-controller-58dbcc7b99-xv7b4
	ee001f462ec29       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   76ded6f144c91       kube-ingress-dns-minikube
	a516c3b7c7733       gcr.io/cloud-spanner-emulator/emulator@sha256:9ded3fac22d4d1c85ae51473e3876e2377f5179192fea664409db0fe87e05ece                               3 minutes ago        Running             cloud-spanner-emulator                   0                   8676722fd6283       cloud-spanner-emulator-5649c69bf6-h6h7r
	d7d8db6d3af51       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago        Running             tiller                                   0                   a4ec25d07498d       tiller-deploy-7b677967b9-gggqt
	b060c1679f174       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   5742909a2bb97       nvidia-device-plugin-daemonset-prm68
	9ddfa01905270       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   01dd561ace55b       storage-provisioner
	77034dbccca8a       ead0a4a53df89                                                                                                                                3 minutes ago        Running             coredns                                  0                   1f89b626df299       coredns-5dd5756b68-btflp
	4cf2c2f848ea2       83f6cc407eed8                                                                                                                                3 minutes ago        Running             kube-proxy                               0                   917de8bccda40       kube-proxy-hhqts
	f1cf04cc2113c       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   6004e0675fc66       kube-scheduler-addons-310200
	5b2b6a761e899       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   0ad8e3fefc957       etcd-addons-310200
	6cd85fa7474d6       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   8b262620bf382       kube-apiserver-addons-310200
	0903615daf3cd       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   79a1214c7ebc8       kube-controller-manager-addons-310200
	
	* 
	* ==> controller_ingress [80ed99003467] <==
	* I1212 22:11:19.672877       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"e7af7ca4-1f86-450f-8c01-407ad4a1bca9", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1212 22:11:19.672917       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"423c7ea5-5a56-4220-8b05-cde43443375b", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1212 22:11:20.839164       7 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1212 22:11:20.839248       7 nginx.go:303] "Starting NGINX process"
	I1212 22:11:20.841293       7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1212 22:11:20.841678       7 controller.go:190] "Configuration changes detected, backend reload required"
	I1212 22:11:20.853374       7 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1212 22:11:20.853791       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-7c6974c4d8-ltlr9"
	I1212 22:11:20.865085       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7c6974c4d8-ltlr9" node="addons-310200"
	I1212 22:11:20.995336       7 controller.go:210] "Backend successfully reloaded"
	I1212 22:11:20.995528       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I1212 22:11:20.996011       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7c6974c4d8-ltlr9", UID:"b021eb0d-e2ce-4b73-a55c-a6a2d91eeae1", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1212 22:12:11.335411       7 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1212 22:12:11.406312       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.071s renderingIngressLength:1 renderingIngressTime:0s admissionTime:18.0kBs testedConfigurationSize:0.071}
	I1212 22:12:11.406450       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1212 22:12:11.418255       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I1212 22:12:11.418873       7 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"9fb601fe-4cfc-4e65-912e-ee8ed3624b3b", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1212 22:12:11.419399       7 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1212 22:12:11.419706       7 controller.go:190] "Configuration changes detected, backend reload required"
	I1212 22:12:11.566931       7 controller.go:210] "Backend successfully reloaded"
	I1212 22:12:11.567910       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7c6974c4d8-ltlr9", UID:"b021eb0d-e2ce-4b73-a55c-a6a2d91eeae1", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1212 22:12:14.754683       7 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	I1212 22:12:14.754849       7 controller.go:190] "Configuration changes detected, backend reload required"
	I1212 22:12:14.896621       7 controller.go:210] "Backend successfully reloaded"
	I1212 22:12:14.897238       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7c6974c4d8-ltlr9", UID:"b021eb0d-e2ce-4b73-a55c-a6a2d91eeae1", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
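	
	The admission events above (ingress="default/nginx-ingress", ingressclass="nginx") correspond to the test applying an Ingress that routes to a Service named nginx in the default namespace — a Service that, per the warnings, had no active Endpoints until the nginx pod finished pulling. Below is a plausible reconstruction of that object using the Kubernetes Go API types; only the names, namespace, and ingress class appear in the log, while the host, path, and backend port are assumptions.
	
	package main
	
	import (
	    networkingv1 "k8s.io/api/networking/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	var (
	    className = "nginx"                     // IngressClass named in the log
	    pathType  = networkingv1.PathTypePrefix // assumed; not shown in the log
	)
	
	// nginxIngress reconstructs default/nginx-ingress as the admission log
	// describes it: class "nginx", backend Service "nginx" (assumed port 80).
	var nginxIngress = &networkingv1.Ingress{
	    ObjectMeta: metav1.ObjectMeta{Name: "nginx-ingress", Namespace: "default"},
	    Spec: networkingv1.IngressSpec{
	        IngressClassName: &className,
	        Rules: []networkingv1.IngressRule{{
	            Host: "nginx.example.com", // hypothetical host, for illustration
	            IngressRuleValue: networkingv1.IngressRuleValue{
	                HTTP: &networkingv1.HTTPIngressRuleValue{
	                    Paths: []networkingv1.HTTPIngressPath{{
	                        Path:     "/",
	                        PathType: &pathType,
	                        Backend: networkingv1.IngressBackend{
	                            Service: &networkingv1.IngressServiceBackend{
	                                Name: "nginx",
	                                Port: networkingv1.ServiceBackendPort{Number: 80},
	                            },
	                        },
	                    }},
	                },
	            },
	        }},
	    },
	}
	
	func main() { _ = nginxIngress }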
	
	* 
	* ==> coredns [77034dbccca8] <==
	* [INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	[INFO] Reloading complete
	[INFO] 127.0.0.1:47441 - 35400 "HINFO IN 2701208162996911597.6715474359507047661. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027791739s
	[INFO] 10.244.0.8:55701 - 18850 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000388854s
	[INFO] 10.244.0.8:55701 - 47530 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000492441s
	[INFO] 10.244.0.8:59034 - 40565 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163881s
	[INFO] 10.244.0.8:59034 - 17515 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100188s
	[INFO] 10.244.0.8:46103 - 38340 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000325961s
	[INFO] 10.244.0.8:46103 - 23239 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000152082s
	[INFO] 10.244.0.8:50629 - 7674 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000264468s
	[INFO] 10.244.0.8:50629 - 46076 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000106388s
	[INFO] 10.244.0.8:51888 - 33961 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058193s
	[INFO] 10.244.0.8:40249 - 30275 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000275567s
	[INFO] 10.244.0.8:52046 - 64983 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000308163s
	[INFO] 10.244.0.8:46668 - 23621 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000218474s
	[INFO] 10.244.0.21:33952 - 39304 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000329891s
	[INFO] 10.244.0.21:39975 - 6471 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000092725s
	[INFO] 10.244.0.21:60356 - 40723 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129535s
	[INFO] 10.244.0.21:47831 - 50743 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000093626s
	[INFO] 10.244.0.21:58077 - 1782 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128435s
	[INFO] 10.244.0.21:40474 - 43110 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.001646152s
	[INFO] 10.244.0.21:60985 - 17585 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.003089047s
	[INFO] 10.244.0.21:55572 - 17059 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.00349826s
	[INFO] 10.244.0.24:33809 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00040999s
	[INFO] 10.244.0.24:35351 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014053s
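	
	The NXDOMAIN entries above are an artifact of Kubernetes DNS search-path expansion: with options ndots:5 (see the cri-dockerd resolv.conf rewrite in the Docker section), any name with fewer than five dots is first tried with each search domain appended, so registry.kube-system.svc.cluster.local is queried as registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local, then with the shorter search domains, before the bare name returns NOERROR. A self-contained sketch of that candidate ordering (illustrative, not the resolver's actual code):
	
	package main
	
	import (
	    "fmt"
	    "strings"
	)
	
	// candidates returns the order in which a resolver with the given search
	// list and ndots threshold tries name: search-domain expansions first when
	// the name has fewer than ndots dots, then the name itself.
	func candidates(name string, search []string, ndots int) []string {
	    if strings.HasSuffix(name, ".") { // absolute names are never expanded
	        return []string{name}
	    }
	    var out []string
	    if strings.Count(name, ".") < ndots {
	        for _, d := range search {
	            out = append(out, name+"."+d)
	        }
	    }
	    return append(out, name)
	}
	
	func main() {
	    // Search list inferred from the expanded queries in the coredns log.
	    search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	    for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
	        fmt.Println(q) // matches the query sequence logged above
	    }
	}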
	
	* 
	* ==> describe nodes <==
	* Name:               addons-310200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-310200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=addons-310200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_08_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-310200
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-310200"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:07:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-310200
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:12:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:12:08 +0000   Tue, 12 Dec 2023 22:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:12:08 +0000   Tue, 12 Dec 2023 22:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:12:08 +0000   Tue, 12 Dec 2023 22:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:12:08 +0000   Tue, 12 Dec 2023 22:08:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.52.75
	  Hostname:    addons-310200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	System Info:
	  Machine ID:                 846859fcd23d455db6b4273e40c2e5e7
	  System UUID:                481114b0-d7d2-da43-a3fd-0a4227a05bb2
	  Boot ID:                    0846bcf7-b060-420d-9f67-13491efb7010
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-h6h7r      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gcp-auth                    gcp-auth-d4c87556c-j7l7m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-ltlr9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m33s
	  kube-system                 coredns-5dd5756b68-btflp                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m6s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 csi-hostpathplugin-fh24n                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-addons-310200                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-addons-310200                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-addons-310200        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-proxy-hhqts                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-addons-310200                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 nvidia-device-plugin-daemonset-prm68         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 snapshot-controller-58dbcc7b99-px9hj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 snapshot-controller-58dbcc7b99-xv7b4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 tiller-deploy-7b677967b9-gggqt               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  local-path-storage          local-path-provisioner-78b46b4d5c-s8c4n      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node addons-310200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node addons-310200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node addons-310200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node addons-310200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node addons-310200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node addons-310200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m16s                  kubelet          Node addons-310200 status is now: NodeReady
	  Normal  RegisteredNode           4m7s                   node-controller  Node addons-310200 event: Registered Node addons-310200 in Controller
	
	* 
	* ==> dmesg <==
	* [  +1.341668] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.349651] systemd-fstab-generator[1163]: Ignoring "noauto" for root device
	[  +0.170246] systemd-fstab-generator[1174]: Ignoring "noauto" for root device
	[  +0.166260] systemd-fstab-generator[1185]: Ignoring "noauto" for root device
	[  +0.178313] systemd-fstab-generator[1196]: Ignoring "noauto" for root device
	[  +0.211215] systemd-fstab-generator[1210]: Ignoring "noauto" for root device
	[ +10.263170] systemd-fstab-generator[1314]: Ignoring "noauto" for root device
	[  +5.587479] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.609727] systemd-fstab-generator[1682]: Ignoring "noauto" for root device
	[  +1.259583] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.584244] systemd-fstab-generator[2632]: Ignoring "noauto" for root device
	[Dec12 22:08] hrtimer: interrupt took 2558478 ns
	[ +17.234279] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.296651] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.379413] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.040996] kauditd_printk_skb: 44 callbacks suppressed
	[Dec12 22:10] kauditd_printk_skb: 18 callbacks suppressed
	[ +15.402009] kauditd_printk_skb: 26 callbacks suppressed
	[Dec12 22:11] kauditd_printk_skb: 18 callbacks suppressed
	[ +15.138359] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.108874] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.413743] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.183371] kauditd_printk_skb: 5 callbacks suppressed
	[Dec12 22:12] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.634561] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [5b2b6a761e89] <==
	* {"level":"info","ts":"2023-12-12T22:09:34.643503Z","caller":"traceutil/trace.go:171","msg":"trace[176602577] transaction","detail":"{read_only:false; response_revision:938; number_of_response:1; }","duration":"161.827381ms","start":"2023-12-12T22:09:34.481667Z","end":"2023-12-12T22:09:34.643494Z","steps":["trace[176602577] 'process raft request'  (duration: 161.32326ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:09:34.643734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.047138ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13482"}
	{"level":"info","ts":"2023-12-12T22:09:34.64376Z","caller":"traceutil/trace.go:171","msg":"trace[1277323872] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:938; }","duration":"280.083832ms","start":"2023-12-12T22:09:34.363669Z","end":"2023-12-12T22:09:34.643753Z","steps":["trace[1277323872] 'agreement among raft nodes before linearized reading'  (duration: 280.006944ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:09:34.643885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"420.481392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10569"}
	{"level":"info","ts":"2023-12-12T22:09:34.643904Z","caller":"traceutil/trace.go:171","msg":"trace[543940249] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:938; }","duration":"420.501789ms","start":"2023-12-12T22:09:34.223397Z","end":"2023-12-12T22:09:34.643899Z","steps":["trace[543940249] 'agreement among raft nodes before linearized reading'  (duration: 420.454196ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:09:34.643921Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T22:09:34.22339Z","time spent":"420.525985ms","remote":"127.0.0.1:35552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10593,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2023-12-12T22:10:02.986652Z","caller":"traceutil/trace.go:171","msg":"trace[1767571344] linearizableReadLoop","detail":"{readStateIndex:1038; appliedIndex:1037; }","duration":"124.93127ms","start":"2023-12-12T22:10:02.861703Z","end":"2023-12-12T22:10:02.986635Z","steps":["trace[1767571344] 'read index received'  (duration: 124.697092ms)","trace[1767571344] 'applied index is now lower than readState.Index'  (duration: 233.378µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:10:02.986919Z","caller":"traceutil/trace.go:171","msg":"trace[491456064] transaction","detail":"{read_only:false; response_revision:1000; number_of_response:1; }","duration":"236.220331ms","start":"2023-12-12T22:10:02.750687Z","end":"2023-12-12T22:10:02.986908Z","steps":["trace[491456064] 'process raft request'  (duration: 235.762774ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:10:02.987168Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.532613ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13482"}
	{"level":"info","ts":"2023-12-12T22:10:02.987231Z","caller":"traceutil/trace.go:171","msg":"trace[1853011499] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1000; }","duration":"125.610006ms","start":"2023-12-12T22:10:02.861614Z","end":"2023-12-12T22:10:02.987224Z","steps":["trace[1853011499] 'agreement among raft nodes before linearized reading'  (duration: 125.423723ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:10:02.98738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.155669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-12T22:10:02.987402Z","caller":"traceutil/trace.go:171","msg":"trace[1208765848] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:1000; }","duration":"110.181066ms","start":"2023-12-12T22:10:02.877215Z","end":"2023-12-12T22:10:02.987396Z","steps":["trace[1208765848] 'agreement among raft nodes before linearized reading'  (duration: 110.133371ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:10:07.33637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.710102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82135"}
	{"level":"info","ts":"2023-12-12T22:10:07.336717Z","caller":"traceutil/trace.go:171","msg":"trace[962237078] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1013; }","duration":"150.83729ms","start":"2023-12-12T22:10:07.185614Z","end":"2023-12-12T22:10:07.336451Z","steps":["trace[962237078] 'range keys from in-memory index tree'  (duration: 150.334935ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:10:07.338463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.938063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10569"}
	{"level":"info","ts":"2023-12-12T22:10:07.338658Z","caller":"traceutil/trace.go:171","msg":"trace[687316810] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1013; }","duration":"107.121547ms","start":"2023-12-12T22:10:07.231509Z","end":"2023-12-12T22:10:07.338631Z","steps":["trace[687316810] 'range keys from in-memory index tree'  (duration: 106.777878ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:10:24.337206Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.237362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10569"}
	{"level":"info","ts":"2023-12-12T22:10:24.337268Z","caller":"traceutil/trace.go:171","msg":"trace[1239780719] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1088; }","duration":"116.320156ms","start":"2023-12-12T22:10:24.220941Z","end":"2023-12-12T22:10:24.337261Z","steps":["trace[1239780719] 'agreement among raft nodes before linearized reading'  (duration: 116.172167ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:10:24.336978Z","caller":"traceutil/trace.go:171","msg":"trace[610062638] linearizableReadLoop","detail":"{readStateIndex:1132; appliedIndex:1132; }","duration":"115.99058ms","start":"2023-12-12T22:10:24.220964Z","end":"2023-12-12T22:10:24.336954Z","steps":["trace[610062638] 'read index received'  (duration: 115.7196ms)","trace[610062638] 'applied index is now lower than readState.Index'  (duration: 266.181µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T22:10:24.339353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.10277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:10:24.339411Z","caller":"traceutil/trace.go:171","msg":"trace[1869273567] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1089; }","duration":"105.145267ms","start":"2023-12-12T22:10:24.234241Z","end":"2023-12-12T22:10:24.339386Z","steps":["trace[1869273567] 'agreement among raft nodes before linearized reading'  (duration: 105.087771ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:11:17.895955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.483225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4147"}
	{"level":"info","ts":"2023-12-12T22:11:17.89667Z","caller":"traceutil/trace.go:171","msg":"trace[1539167569] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1241; }","duration":"172.205036ms","start":"2023-12-12T22:11:17.724451Z","end":"2023-12-12T22:11:17.896657Z","steps":["trace[1539167569] 'range keys from in-memory index tree'  (duration: 171.406102ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:11:48.398462Z","caller":"traceutil/trace.go:171","msg":"trace[736312819] transaction","detail":"{read_only:false; response_revision:1419; number_of_response:1; }","duration":"226.967399ms","start":"2023-12-12T22:11:48.171478Z","end":"2023-12-12T22:11:48.398445Z","steps":["trace[736312819] 'process raft request'  (duration: 226.58133ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:11:57.09921Z","caller":"traceutil/trace.go:171","msg":"trace[921431229] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1441; }","duration":"101.675519ms","start":"2023-12-12T22:11:56.997518Z","end":"2023-12-12T22:11:57.099193Z","steps":["trace[921431229] 'process raft request'  (duration: 101.327565ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [655bb0820410] <==
	* 2023/12/12 22:11:21 GCP Auth Webhook started!
	2023/12/12 22:11:24 Ready to marshal response ...
	2023/12/12 22:11:24 Ready to write response ...
	2023/12/12 22:11:24 Ready to marshal response ...
	2023/12/12 22:11:24 Ready to write response ...
	2023/12/12 22:11:33 Ready to marshal response ...
	2023/12/12 22:11:33 Ready to write response ...
	2023/12/12 22:11:43 Ready to marshal response ...
	2023/12/12 22:11:43 Ready to write response ...
	2023/12/12 22:11:48 Ready to marshal response ...
	2023/12/12 22:11:48 Ready to write response ...
	2023/12/12 22:12:11 Ready to marshal response ...
	2023/12/12 22:12:11 Ready to write response ...
	2023/12/12 22:12:17 Ready to marshal response ...
	2023/12/12 22:12:17 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:12:19 up 6 min,  0 users,  load average: 2.36, 2.43, 1.17
	Linux addons-310200 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6cd85fa7474d] <==
	* I1212 22:08:57.177219       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 22:09:43.856527       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 22:09:43.856618       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 22:09:43.856630       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 22:09:43.857808       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 22:09:43.857947       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 22:09:43.857959       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 22:09:57.176832       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 22:10:03.639322       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 22:10:03.639379       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1212 22:10:03.639390       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.199.7:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.199.7:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.199.7:443: connect: connection refused
	I1212 22:10:03.640093       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 22:10:03.644113       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.199.7:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.199.7:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.199.7:443: connect: connection refused
	E1212 22:10:03.648712       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.199.7:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.199.7:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.199.7:443: connect: connection refused
	I1212 22:10:03.747167       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 22:10:57.190716       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 22:12:04.659514       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 22:12:05.095168       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1212 22:12:05.115789       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1212 22:12:06.147434       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1212 22:12:08.670426       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 22:12:11.408120       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 22:12:11.984449       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.255.93"}
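The recurring apiserver failures above all reduce to one symptom: the aggregated v1beta1.metrics.k8s.io APIService pointed at a Service endpoint (10.110.199.7:443) that refused connections while metrics-server was restarting. A throwaway reachability probe, with the address copied from the log lines above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Service IP/port copied from the kube-apiserver errors above.
	addr := "10.110.199.7:443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("metrics endpoint unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("metrics endpoint accepts TCP connections")
}

Once the endpoint accepts connections the AggregationController requeues the item and the OpenAPI errors stop, which matches the "Nothing (removed from the queue)" line at 22:12:04.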
	
	* 
	* ==> kube-controller-manager [0903615daf3c] <==
	* I1212 22:11:27.771203       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:11:27.772604       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:11:34.351417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="28.680198ms"
	I1212 22:11:34.351781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="293.265µs"
	I1212 22:11:42.770788       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:11:42.829429       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:11:43.477043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="4.901µs"
	I1212 22:11:57.103034       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="6.801µs"
	E1212 22:12:06.150409       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 22:12:06.752732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="4.6µs"
	W1212 22:12:07.080385       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:12:07.080493       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:12:09.758232       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:12:09.759031       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 22:12:10.686774       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:12:10.687022       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:12:12.772883       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:12:13.355066       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1212 22:12:13.355102       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 22:12:13.732944       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1212 22:12:13.733079       1 shared_informer.go:318] Caches are synced for garbage collector
	W1212 22:12:14.252772       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:12:14.252853       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 22:12:15.432499       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I1212 22:12:16.363892       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
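The *v1.PartialObjectMetadata watch failures above begin right after the apiserver terminated all watchers for traces.gadget.kinvolk.io (22:12:06) and shortly before the gadget namespace deletion was confirmed, so they most likely come from the metadata informer still tracking the just-removed CRD; they are noisy but self-healing. A quick way to confirm the CRD is gone (a sketch that shells out to kubectl, assumed to be on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Check whether the CRD whose watchers were terminated still exists.
	out, err := exec.Command("kubectl", "--context", "addons-310200",
		"get", "crd", "traces.gadget.kinvolk.io").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// kubectl exits nonzero with a NotFound message once the gadget addon is removed.
		fmt.Println("lookup failed:", err)
	}
}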
	
	* 
	* ==> kube-proxy [4cf2c2f848ea] <==
	* I1212 22:08:26.069522       1 server_others.go:69] "Using iptables proxy"
	I1212 22:08:26.207666       1 node.go:141] Successfully retrieved node IP: 172.30.52.75
	I1212 22:08:26.616304       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 22:08:26.616371       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 22:08:26.648462       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:08:26.648754       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:08:26.649068       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:08:26.649089       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:08:26.695231       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:08:26.695273       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:08:26.695324       1 config.go:188] "Starting service config controller"
	I1212 22:08:26.695335       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:08:26.696816       1 config.go:315] "Starting node config controller"
	I1212 22:08:26.697084       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:08:26.811647       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 22:08:26.812889       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:08:26.827648       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [f1cf04cc2113] <==
	* W1212 22:07:57.367008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 22:07:57.367018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 22:07:57.368065       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:07:57.368104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 22:07:57.368601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 22:07:57.369141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 22:07:58.210306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:07:58.210354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:07:58.237143       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:07:58.237250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 22:07:58.250244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 22:07:58.252443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 22:07:58.358118       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 22:07:58.358145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:07:58.431948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 22:07:58.432251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 22:07:58.478734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 22:07:58.478834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 22:07:58.521324       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:07:58.521384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 22:07:58.664250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:07:58.664492       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 22:07:58.807913       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:07:58.808219       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 22:08:01.256303       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
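Every error in the scheduler section is the same RBAC "forbidden" failure, repeated across resource types during the first seconds of bootstrap before the system:kube-scheduler bindings had propagated; the final "Caches are synced" line shows the condition cleared by 22:08:01. If the errors persisted, kubectl's impersonation support could confirm the permissions; a sketch shelling out to kubectl (assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror one of the forbidden listings above: may system:kube-scheduler list pods?
	out, err := exec.Command("kubectl", "--context", "addons-310200",
		"auth", "can-i", "list", "pods", "--as=system:kube-scheduler").CombinedOutput()
	fmt.Printf("%s", out) // "yes" once RBAC has propagated
	if err != nil {
		// kubectl auth can-i exits nonzero when the answer is "no".
		fmt.Println("denied or lookup failed:", err)
	}
}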
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 22:06:05 UTC, ends at Tue 2023-12-12 22:12:19 UTC. --
	Dec 12 22:12:11 addons-310200 kubelet[2659]: E1212 22:12:11.920037    2659 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2587dc96-1f27-485a-a57c-af6b85d780c7" containerName="helper-pod"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: E1212 22:12:11.920057    2659 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ebbb0e34-e907-46c7-bb28-c282ed55fb13" containerName="registry-proxy"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: E1212 22:12:11.920068    2659 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8d5b875-bf2c-4e6e-8839-29b95c7833e4" containerName="gadget"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: E1212 22:12:11.920077    2659 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8d5b875-bf2c-4e6e-8839-29b95c7833e4" containerName="gadget"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: E1212 22:12:11.920086    2659 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8d5b875-bf2c-4e6e-8839-29b95c7833e4" containerName="gadget"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: E1212 22:12:11.920094    2659 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8d5b875-bf2c-4e6e-8839-29b95c7833e4" containerName="gadget"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: E1212 22:12:11.920104    2659 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce060e5a-4538-49fd-b48d-35ccd21eb735" containerName="registry"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: E1212 22:12:11.920113    2659 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1170ed40-b3e5-4cc4-af55-e33a6818ce5c" containerName="task-pv-container"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: I1212 22:12:11.920156    2659 memory_manager.go:346] "RemoveStaleState removing state" podUID="1170ed40-b3e5-4cc4-af55-e33a6818ce5c" containerName="task-pv-container"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: I1212 22:12:11.920166    2659 memory_manager.go:346] "RemoveStaleState removing state" podUID="d8d5b875-bf2c-4e6e-8839-29b95c7833e4" containerName="gadget"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: I1212 22:12:11.920174    2659 memory_manager.go:346] "RemoveStaleState removing state" podUID="d8d5b875-bf2c-4e6e-8839-29b95c7833e4" containerName="gadget"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: I1212 22:12:11.920183    2659 memory_manager.go:346] "RemoveStaleState removing state" podUID="d8d5b875-bf2c-4e6e-8839-29b95c7833e4" containerName="gadget"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: I1212 22:12:11.920191    2659 memory_manager.go:346] "RemoveStaleState removing state" podUID="ce060e5a-4538-49fd-b48d-35ccd21eb735" containerName="registry"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: I1212 22:12:11.920200    2659 memory_manager.go:346] "RemoveStaleState removing state" podUID="ebbb0e34-e907-46c7-bb28-c282ed55fb13" containerName="registry-proxy"
	Dec 12 22:12:11 addons-310200 kubelet[2659]: I1212 22:12:11.920208    2659 memory_manager.go:346] "RemoveStaleState removing state" podUID="2587dc96-1f27-485a-a57c-af6b85d780c7" containerName="helper-pod"
	Dec 12 22:12:12 addons-310200 kubelet[2659]: I1212 22:12:12.041848    2659 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8497723e-3460-40dd-b7e1-48f6f9048981-gcp-creds\") pod \"nginx\" (UID: \"8497723e-3460-40dd-b7e1-48f6f9048981\") " pod="default/nginx"
	Dec 12 22:12:12 addons-310200 kubelet[2659]: I1212 22:12:12.042208    2659 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dvg9\" (UniqueName: \"kubernetes.io/projected/8497723e-3460-40dd-b7e1-48f6f9048981-kube-api-access-8dvg9\") pod \"nginx\" (UID: \"8497723e-3460-40dd-b7e1-48f6f9048981\") " pod="default/nginx"
	Dec 12 22:12:17 addons-310200 kubelet[2659]: I1212 22:12:17.694863    2659 topology_manager.go:215] "Topology Admit Handler" podUID="3780c11b-c8dc-4f63-851a-9b609d12f72d" podNamespace="default" podName="task-pv-pod-restore"
	Dec 12 22:12:17 addons-310200 kubelet[2659]: I1212 22:12:17.698176    2659 memory_manager.go:346] "RemoveStaleState removing state" podUID="d8d5b875-bf2c-4e6e-8839-29b95c7833e4" containerName="gadget"
	Dec 12 22:12:17 addons-310200 kubelet[2659]: I1212 22:12:17.812439    2659 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d0b61495-1fdb-455f-9709-e448a16df687\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8257adbe-993b-11ee-846c-7e464575522a\") pod \"task-pv-pod-restore\" (UID: \"3780c11b-c8dc-4f63-851a-9b609d12f72d\") " pod="default/task-pv-pod-restore"
	Dec 12 22:12:17 addons-310200 kubelet[2659]: I1212 22:12:17.812495    2659 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3780c11b-c8dc-4f63-851a-9b609d12f72d-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"3780c11b-c8dc-4f63-851a-9b609d12f72d\") " pod="default/task-pv-pod-restore"
	Dec 12 22:12:17 addons-310200 kubelet[2659]: I1212 22:12:17.812651    2659 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdn4\" (UniqueName: \"kubernetes.io/projected/3780c11b-c8dc-4f63-851a-9b609d12f72d-kube-api-access-8hdn4\") pod \"task-pv-pod-restore\" (UID: \"3780c11b-c8dc-4f63-851a-9b609d12f72d\") " pod="default/task-pv-pod-restore"
	Dec 12 22:12:17 addons-310200 kubelet[2659]: I1212 22:12:17.931735    2659 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-d0b61495-1fdb-455f-9709-e448a16df687\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8257adbe-993b-11ee-846c-7e464575522a\") pod \"task-pv-pod-restore\" (UID: \"3780c11b-c8dc-4f63-851a-9b609d12f72d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/24339b581fc45ebfdc1c1f03b520ae269f02e0171334eed7128ec8a5d0cc45c3/globalmount\"" pod="default/task-pv-pod-restore"
	Dec 12 22:12:18 addons-310200 kubelet[2659]: I1212 22:12:18.966095    2659 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1e9f752343ccb9e307c313970155cd5eb2b8f394260b2c16a8530a3fcb7c11"
	Dec 12 22:12:19 addons-310200 kubelet[2659]: I1212 22:12:19.015904    2659 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=3.276670919 podCreationTimestamp="2023-12-12 22:12:11 +0000 UTC" firstStartedPulling="2023-12-12 22:12:13.058238388 +0000 UTC m=+252.187864253" lastFinishedPulling="2023-12-12 22:12:17.797414468 +0000 UTC m=+256.927040233" observedRunningTime="2023-12-12 22:12:19.014133513 +0000 UTC m=+258.143759278" watchObservedRunningTime="2023-12-12 22:12:19.015846899 +0000 UTC m=+258.145472764"
	
	* 
	* ==> storage-provisioner [9ddfa0190527] <==
	* I1212 22:08:49.790942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 22:08:49.810013       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 22:08:49.810061       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 22:08:49.820795       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 22:08:49.821113       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-310200_e073555a-a4ad-45f8-99ea-2991a7dd4f22!
	I1212 22:08:49.821175       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b04a0c5-be1c-4632-b90d-458a639dbc56", APIVersion:"v1", ResourceVersion:"809", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-310200_e073555a-a4ad-45f8-99ea-2991a7dd4f22 became leader
	I1212 22:08:49.921679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-310200_e073555a-a4ad-45f8-99ea-2991a7dd4f22!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 22:12:10.963944   14576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-310200 -n addons-310200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-310200 -n addons-310200: (12.7680753s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-310200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-whlkv ingress-nginx-admission-patch-mcqfn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-310200 describe pod ingress-nginx-admission-create-whlkv ingress-nginx-admission-patch-mcqfn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-310200 describe pod ingress-nginx-admission-create-whlkv ingress-nginx-admission-patch-mcqfn: exit status 1 (186.1379ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-whlkv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mcqfn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-310200 describe pod ingress-nginx-admission-create-whlkv ingress-nginx-admission-patch-mcqfn: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.86s)
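For context on the post-mortem above: the helper finds non-Running pods with a field selector, and the two matches (which appear to be the short-lived ingress-nginx admission jobs) had already been deleted by the time describe ran, hence the NotFound errors instead of useful detail. The same query can be re-run by hand; a sketch assuming kubectl on PATH and the addons-310200 context still available:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same query the test helper runs: names of all pods whose phase is not Running.
	out, err := exec.Command("kubectl", "--context", "addons-310200",
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Printf("non-running pods: %s\n", out)
}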

                                                
                                    
TestCertExpiration (969.44s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-764000 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-764000 --memory=2048 --cert-expiration=3m --driver=hyperv: (5m32.6801205s)
E1213 00:03:56.428853   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1213 00:05:53.183715   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
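The second start below asks minikube to re-issue the now-expired certificates with --cert-expiration=8760h, i.e. one year; a trivial check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The two --cert-expiration values used by the test runs above.
	short, _ := time.ParseDuration("3m")
	long, _ := time.ParseDuration("8760h")
	fmt.Printf("first run certs: %v, renewed certs: %.0f days\n", short, long.Hours()/24)
}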
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-764000 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-764000 --memory=2048 --cert-expiration=8760h --driver=hyperv: exit status 90 (3m16.0978806s)

                                                
                                                
-- stdout --
	* [cert-expiration-764000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node cert-expiration-764000 in cluster cert-expiration-764000
	* Updating the running hyperv "cert-expiration-764000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1213 00:06:42.415441    8256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Wed 2023-12-13 00:01:25 UTC, ends at Wed 2023-12-13 00:09:58 UTC. --
	Dec 13 00:02:18 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.170789358Z" level=info msg="Starting up"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.171687760Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.173052563Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.211567058Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.238058823Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.238165223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240582129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240703030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240985930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241086630Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241188931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241337031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241436031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241594032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242023733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242128833Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242147133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242299633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242390534Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242462034Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242535234Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252200758Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252308958Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252331958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252366958Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252384458Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252396658Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252411658Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252586859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252685859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252708259Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252723959Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252739759Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252812459Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252833959Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252848459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252867259Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252883059Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252906460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252923560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253020760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253489061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253623461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253648061Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253672361Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253725262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253869862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253892062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253906762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253921562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253935762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253949062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253962962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253979062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254040862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254137163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254159063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254177863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254192863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254209763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254223863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254236763Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254252863Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254265263Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254277763Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254526164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254713064Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254802964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254897664Z" level=info msg="containerd successfully booted in 0.046035s"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.287694945Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.306157191Z" level=info msg="Loading containers: start."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.528390037Z" level=info msg="Loading containers: done."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546109381Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546138181Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546146281Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546153281Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546174081Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546280581Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.602986421Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.603109221Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:02:18 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.466288561Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468163761Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468227961Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468304161Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468573361Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:02:49 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.538052161Z" level=info msg="Starting up"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.539012361Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.540190961Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1015
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.577046661Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.600148661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.600189061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.602660161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.602851261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603116061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603246561Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603278261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603302361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603314161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603337961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603494561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603595061Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603613261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603968361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604023361Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604045061Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604057161Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604191061Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604271861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604287961Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604312261Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604327961Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604338261Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604351361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604396861Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604433061Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604448961Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604461861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604493361Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604512061Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604525861Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604538561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604551661Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604568861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604602761Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604614761Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604655461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605593561Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605726161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605906361Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606004761Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606147261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606300661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606401361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606459361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606519361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606692361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606841361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606925661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.607592561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608179461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608278661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608298761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608313461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608332861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608348361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608363261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608375561Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608392561Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608406061Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608417461Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608700761Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608847161Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608922561Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608963961Z" level=info msg="containerd successfully booted in 0.033071s"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.638998761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.647732061Z" level=info msg="Loading containers: start."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.811806461Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.884143661Z" level=info msg="Loading containers: done."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901144161Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901169061Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901177061Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901186161Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901260061Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901300561Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.942317361Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.942459061Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.839751361Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:03:04 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841529461Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841543661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841603961Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841791861Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.922000961Z" level=info msg="Starting up"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.924912161Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.925982861Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1325
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.958071161Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.985579561Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.985765461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.988633261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.988765961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989028361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989116961Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989146061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989169261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989181561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989248461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989395161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989487661Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989506761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989684561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989844561Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989871861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989885861Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990135961Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990283161Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990303061Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990327761Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990342161Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990353561Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990365061Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990411261Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990448561Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990464261Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990478061Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990490861Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990507361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990521561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990534561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990547561Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990571961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990589261Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990601561Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990641661Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.991998461Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992153061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992182161Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992299461Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992381061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992479461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992502661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992519561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992536461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992553861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992570261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992653561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992837961Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993066061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993111261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993147961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993166461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993240261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993313761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993341661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993375761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993396261Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993428861Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993443761Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994605261Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994837961Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994898861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994964961Z" level=info msg="containerd successfully booted in 0.038484s"
	Dec 13 00:03:06 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:06.261082461Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.207333561Z" level=info msg="Loading containers: start."
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.377587761Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.449019961Z" level=info msg="Loading containers: done."
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469345661Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469369461Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469377361Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469384461Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469403061Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469443061Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.501343561Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:03:07 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.502383261Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578415718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578724604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578983692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.579293778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.658608094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660012328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660473207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660505105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674371261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674560953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674601251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674911636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716086924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716411009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716514404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716581501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.362991227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.363074623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.363146720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.366402578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.733319501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734085067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734357155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734546647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688627673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688722069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688762967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688781866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.709943202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710449082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710708271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710885564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.160881848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.162670131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.162933629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.163099927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.224942455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225402651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225599349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225801847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451468859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451562458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451631358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451679457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263304600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263476598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263523698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263571997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.354503609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.354940505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.355161803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.355364301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.845510750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.846166344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.846672440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.847001337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:04:13.212492609Z" level=info msg="ignoring event" container=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.213315508Z" level=info msg="shim disconnected" id=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 namespace=moby
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.213954208Z" level=warning msg="cleaning up after shim disconnected" id=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 namespace=moby
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.214015308Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152650118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152916217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152987317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.153059317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.570287799Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:08:46 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.830384447Z" level=info msg="ignoring event" container=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.831207950Z" level=info msg="shim disconnected" id=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.833616657Z" level=warning msg="cleaning up after shim disconnected" id=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.833822758Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.898561469Z" level=info msg="shim disconnected" id=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.903175684Z" level=warning msg="cleaning up after shim disconnected" id=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.903250084Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904690389Z" level=info msg="shim disconnected" id=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904782889Z" level=warning msg="cleaning up after shim disconnected" id=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904816689Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.907284697Z" level=info msg="ignoring event" container=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.909262304Z" level=info msg="ignoring event" container=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.031959304Z" level=info msg="shim disconnected" id=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.032343505Z" level=warning msg="cleaning up after shim disconnected" id=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.032726606Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.034706813Z" level=info msg="ignoring event" container=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.034823313Z" level=info msg="ignoring event" container=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035590516Z" level=info msg="shim disconnected" id=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035808916Z" level=warning msg="cleaning up after shim disconnected" id=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035944817Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.123795903Z" level=info msg="ignoring event" container=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125379808Z" level=info msg="shim disconnected" id=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125602409Z" level=warning msg="cleaning up after shim disconnected" id=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125776509Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.128588619Z" level=info msg="ignoring event" container=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.128300918Z" level=info msg="shim disconnected" id=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.130847326Z" level=warning msg="cleaning up after shim disconnected" id=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.130965626Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.175488071Z" level=info msg="ignoring event" container=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.176164374Z" level=info msg="ignoring event" container=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.177352878Z" level=info msg="ignoring event" container=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.178226080Z" level=info msg="ignoring event" container=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178606882Z" level=info msg="shim disconnected" id=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178716682Z" level=warning msg="cleaning up after shim disconnected" id=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178741682Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179001583Z" level=info msg="shim disconnected" id=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179034083Z" level=warning msg="cleaning up after shim disconnected" id=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179042983Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179198184Z" level=info msg="shim disconnected" id=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179224284Z" level=warning msg="cleaning up after shim disconnected" id=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179232584Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179389884Z" level=info msg="shim disconnected" id=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179444284Z" level=warning msg="cleaning up after shim disconnected" id=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179617185Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.251878020Z" level=info msg="ignoring event" container=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252605823Z" level=info msg="shim disconnected" id=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252677723Z" level=warning msg="cleaning up after shim disconnected" id=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252692423Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.397895696Z" level=warning msg="cleanup warnings time=\"2023-12-13T00:08:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:51.781962083Z" level=info msg="ignoring event" container=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783243787Z" level=info msg="shim disconnected" id=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783368887Z" level=warning msg="cleaning up after shim disconnected" id=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783433087Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:56 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:56.948007044Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.034888503Z" level=info msg="ignoring event" container=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037282983Z" level=info msg="shim disconnected" id=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037722280Z" level=warning msg="cleaning up after shim disconnected" id=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037755580Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.090830251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.091777244Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.091931643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.092156641Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:08:58 cert-expiration-764000 dockerd[7577]: time="2023-12-13T00:08:58.210551264Z" level=info msg="Starting up"
	Dec 13 00:09:58 cert-expiration-764000 dockerd[7577]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
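The journal excerpt above ends with the final dockerd restart (pid 7577) timing out while dialing /run/containerd/containerd.sock, which is why `sudo systemctl restart docker` exits 1 and minikube aborts with RUNTIME_ENABLE. Note that the earlier dockerd instances in the same journal started their own managed containerd on /var/run/docker/containerd/containerd.sock, while the failing instance dials the system socket; whether a system containerd was running at that point is not visible in this excerpt. A minimal manual check from the host, assuming the cert-expiration-764000 VM still exists and is reachable (the containerd unit name is an assumption, not confirmed by this log):

	out/minikube-windows-amd64.exe ssh -p cert-expiration-764000        # shell into the VM; assumes the profile was not deleted
	sudo systemctl status containerd --no-pager                         # assumption: a system containerd unit backs /run/containerd/containerd.sock
	sudo journalctl --no-pager -u docker | tail -n 50                   # last docker unit log lines, as the error text itself suggests
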
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-764000 --memory=2048 --cert-expiration=8760h --driver=hyperv" : exit status 90
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-764000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node cert-expiration-764000 in cluster cert-expiration-764000
	* Updating the running hyperv "cert-expiration-764000" VM ...
	
	

-- /stdout --
** stderr ** 
	W1213 00:06:42.415441    8256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Wed 2023-12-13 00:01:25 UTC, ends at Wed 2023-12-13 00:09:58 UTC. --
	Dec 13 00:02:18 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.170789358Z" level=info msg="Starting up"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.171687760Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.173052563Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.211567058Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.238058823Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.238165223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240582129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240703030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240985930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241086630Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241188931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241337031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241436031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241594032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242023733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242128833Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242147133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242299633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242390534Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242462034Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242535234Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252200758Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252308958Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252331958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252366958Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252384458Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252396658Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252411658Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252586859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252685859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252708259Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252723959Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252739759Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252812459Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252833959Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252848459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252867259Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252883059Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252906460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252923560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253020760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253489061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253623461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253648061Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253672361Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253725262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253869862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253892062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253906762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253921562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253935762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253949062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253962962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253979062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254040862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254137163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254159063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254177863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254192863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254209763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254223863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254236763Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254252863Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254265263Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254277763Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254526164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254713064Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254802964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254897664Z" level=info msg="containerd successfully booted in 0.046035s"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.287694945Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.306157191Z" level=info msg="Loading containers: start."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.528390037Z" level=info msg="Loading containers: done."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546109381Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546138181Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546146281Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546153281Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546174081Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546280581Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.602986421Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.603109221Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:02:18 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.466288561Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468163761Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468227961Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468304161Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468573361Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:02:49 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.538052161Z" level=info msg="Starting up"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.539012361Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.540190961Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1015
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.577046661Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.600148661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.600189061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.602660161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.602851261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603116061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603246561Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603278261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603302361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603314161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603337961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603494561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603595061Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603613261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603968361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604023361Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604045061Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604057161Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604191061Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604271861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604287961Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604312261Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604327961Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604338261Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604351361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604396861Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604433061Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604448961Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604461861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604493361Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604512061Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604525861Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604538561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604551661Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604568861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604602761Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604614761Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604655461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605593561Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605726161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605906361Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606004761Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606147261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606300661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606401361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606459361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606519361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606692361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606841361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606925661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.607592561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608179461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608278661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608298761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608313461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608332861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608348361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608363261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608375561Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608392561Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608406061Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608417461Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608700761Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608847161Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608922561Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608963961Z" level=info msg="containerd successfully booted in 0.033071s"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.638998761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.647732061Z" level=info msg="Loading containers: start."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.811806461Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.884143661Z" level=info msg="Loading containers: done."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901144161Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901169061Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901177061Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901186161Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901260061Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901300561Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.942317361Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.942459061Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.839751361Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:03:04 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841529461Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841543661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841603961Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841791861Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.922000961Z" level=info msg="Starting up"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.924912161Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.925982861Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1325
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.958071161Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.985579561Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.985765461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.988633261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.988765961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989028361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989116961Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989146061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989169261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989181561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989248461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989395161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989487661Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989506761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989684561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989844561Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989871861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989885861Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990135961Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990283161Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990303061Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990327761Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990342161Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990353561Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990365061Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990411261Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990448561Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990464261Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990478061Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990490861Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990507361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990521561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990534561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990547561Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990571961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990589261Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990601561Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990641661Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.991998461Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992153061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992182161Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992299461Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992381061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992479461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992502661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992519561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992536461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992553861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992570261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992653561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992837961Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993066061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993111261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993147961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993166461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993240261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993313761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993341661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993375761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993396261Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993428861Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993443761Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994605261Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994837961Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994898861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994964961Z" level=info msg="containerd successfully booted in 0.038484s"
	Dec 13 00:03:06 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:06.261082461Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.207333561Z" level=info msg="Loading containers: start."
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.377587761Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.449019961Z" level=info msg="Loading containers: done."
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469345661Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469369461Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469377361Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469384461Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469403061Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469443061Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.501343561Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:03:07 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.502383261Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578415718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578724604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578983692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.579293778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.658608094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660012328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660473207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660505105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674371261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674560953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674601251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674911636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716086924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716411009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716514404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716581501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.362991227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.363074623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.363146720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.366402578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.733319501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734085067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734357155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734546647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688627673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688722069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688762967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688781866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.709943202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710449082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710708271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710885564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.160881848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.162670131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.162933629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.163099927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.224942455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225402651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225599349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225801847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451468859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451562458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451631358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451679457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263304600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263476598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263523698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263571997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.354503609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.354940505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.355161803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.355364301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.845510750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.846166344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.846672440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.847001337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:04:13.212492609Z" level=info msg="ignoring event" container=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.213315508Z" level=info msg="shim disconnected" id=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 namespace=moby
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.213954208Z" level=warning msg="cleaning up after shim disconnected" id=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 namespace=moby
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.214015308Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152650118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152916217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152987317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.153059317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.570287799Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:08:46 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.830384447Z" level=info msg="ignoring event" container=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.831207950Z" level=info msg="shim disconnected" id=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.833616657Z" level=warning msg="cleaning up after shim disconnected" id=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.833822758Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.898561469Z" level=info msg="shim disconnected" id=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.903175684Z" level=warning msg="cleaning up after shim disconnected" id=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.903250084Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904690389Z" level=info msg="shim disconnected" id=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904782889Z" level=warning msg="cleaning up after shim disconnected" id=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904816689Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.907284697Z" level=info msg="ignoring event" container=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.909262304Z" level=info msg="ignoring event" container=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.031959304Z" level=info msg="shim disconnected" id=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.032343505Z" level=warning msg="cleaning up after shim disconnected" id=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.032726606Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.034706813Z" level=info msg="ignoring event" container=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.034823313Z" level=info msg="ignoring event" container=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035590516Z" level=info msg="shim disconnected" id=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035808916Z" level=warning msg="cleaning up after shim disconnected" id=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035944817Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.123795903Z" level=info msg="ignoring event" container=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125379808Z" level=info msg="shim disconnected" id=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125602409Z" level=warning msg="cleaning up after shim disconnected" id=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125776509Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.128588619Z" level=info msg="ignoring event" container=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.128300918Z" level=info msg="shim disconnected" id=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.130847326Z" level=warning msg="cleaning up after shim disconnected" id=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.130965626Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.175488071Z" level=info msg="ignoring event" container=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.176164374Z" level=info msg="ignoring event" container=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.177352878Z" level=info msg="ignoring event" container=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.178226080Z" level=info msg="ignoring event" container=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178606882Z" level=info msg="shim disconnected" id=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178716682Z" level=warning msg="cleaning up after shim disconnected" id=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178741682Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179001583Z" level=info msg="shim disconnected" id=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179034083Z" level=warning msg="cleaning up after shim disconnected" id=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179042983Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179198184Z" level=info msg="shim disconnected" id=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179224284Z" level=warning msg="cleaning up after shim disconnected" id=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179232584Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179389884Z" level=info msg="shim disconnected" id=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179444284Z" level=warning msg="cleaning up after shim disconnected" id=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179617185Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.251878020Z" level=info msg="ignoring event" container=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252605823Z" level=info msg="shim disconnected" id=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252677723Z" level=warning msg="cleaning up after shim disconnected" id=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252692423Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.397895696Z" level=warning msg="cleanup warnings time=\"2023-12-13T00:08:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:51.781962083Z" level=info msg="ignoring event" container=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783243787Z" level=info msg="shim disconnected" id=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783368887Z" level=warning msg="cleaning up after shim disconnected" id=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783433087Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:56 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:56.948007044Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.034888503Z" level=info msg="ignoring event" container=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037282983Z" level=info msg="shim disconnected" id=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037722280Z" level=warning msg="cleaning up after shim disconnected" id=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037755580Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.090830251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.091777244Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.091931643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.092156641Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:08:58 cert-expiration-764000 dockerd[7577]: time="2023-12-13T00:08:58.210551264Z" level=info msg="Starting up"
	Dec 13 00:09:58 cert-expiration-764000 dockerd[7577]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
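The root cause is visible at the tail of the journal above: systemd stopped and restarted docker.service, the new dockerd (pid 7577) logged "Starting up" at 00:08:58, and sixty seconds later it aborted with failed to dial "/run/containerd/containerd.sock": context deadline exceeded, so the unit ended in status=1/FAILURE and the cluster restart behind TestCertExpiration never completed. A minimal Go sketch of how that error string arises, assuming a plain retry-dial loop (dockerd's real supervisor dials containerd over gRPC; the 500ms retry interval below is invented for illustration, while the socket path and 60-second budget are read off the journal):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitForSocket retries dialing sock until a connection succeeds or the
// context deadline expires, mirroring a daemon waiting for containerd.
func waitForSocket(ctx context.Context, sock string) error {
	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", sock)
		if err == nil {
			conn.Close()
			return nil // containerd is up and accepting connections
		}
		select {
		case <-ctx.Done():
			// Once the budget elapses this wraps context.DeadlineExceeded,
			// printing: failed to dial "...": context deadline exceeded
			return fmt.Errorf("failed to dial %q: %w", sock, ctx.Err())
		case <-time.After(500 * time.Millisecond):
			// Socket missing or refusing; back off briefly and retry.
		}
	}
}

func main() {
	// 60s matches the gap between "Starting up" (00:08:58) and the
	// daemon failure (00:09:58) in the journal above.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := waitForSocket(ctx, "/run/containerd/containerd.sock"); err != nil {
		fmt.Println(err)
	}
}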
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-12-13 00:09:58.6286231 +0000 UTC m=+7556.653312401
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-764000 -n cert-expiration-764000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-764000 -n cert-expiration-764000: exit status 2 (12.687737s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W1213 00:09:58.777357     752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-expiration-764000 logs -n 25
E1213 00:10:53.190985   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1213 00:11:22.651362   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p cert-expiration-764000 logs -n 25: (2m48.038796s)
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p test-preload-686300            | test-preload-686300       | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:44 UTC | 12 Dec 23 23:48 UTC |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --alsologtostderr                 |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false       |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.4      |                           |                   |         |                     |                     |
	| image   | test-preload-686300 image pull    | test-preload-686300       | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:48 UTC | 12 Dec 23 23:49 UTC |
	|         | gcr.io/k8s-minikube/busybox       |                           |                   |         |                     |                     |
	| stop    | -p test-preload-686300            | test-preload-686300       | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:49 UTC | 12 Dec 23 23:49 UTC |
	| start   | -p test-preload-686300            | test-preload-686300       | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:49 UTC | 12 Dec 23 23:52 UTC |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --wait=true --driver=hyperv       |                           |                   |         |                     |                     |
	| image   | test-preload-686300 image list    | test-preload-686300       | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:52 UTC | 12 Dec 23 23:52 UTC |
	| delete  | -p test-preload-686300            | test-preload-686300       | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:52 UTC | 12 Dec 23 23:52 UTC |
	| start   | -p scheduled-stop-667200          | scheduled-stop-667200     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:52 UTC | 12 Dec 23 23:55 UTC |
	|         | --memory=2048 --driver=hyperv     |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-667200          | scheduled-stop-667200     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:56 UTC |
	|         | --schedule 5m                     |                           |                   |         |                     |                     |
	| ssh     | -p scheduled-stop-667200          | scheduled-stop-667200     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:56 UTC | 12 Dec 23 23:56 UTC |
	|         | -- sudo systemctl show            |                           |                   |         |                     |                     |
	|         | minikube-scheduled-stop           |                           |                   |         |                     |                     |
	|         | --no-page                         |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-667200          | scheduled-stop-667200     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:56 UTC | 12 Dec 23 23:56 UTC |
	|         | --schedule 5s                     |                           |                   |         |                     |                     |
	| delete  | -p scheduled-stop-667200          | scheduled-stop-667200     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:57 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p NoKubernetes-665000            | NoKubernetes-665000       | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --no-kubernetes                   |                           |                   |         |                     |                     |
	|         | --kubernetes-version=1.20         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| start   | -p cert-expiration-764000         | cert-expiration-764000    | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:03 UTC |
	|         | --memory=2048                     |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m              |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| start   | -p offline-docker-622300          | offline-docker-622300     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --memory=2048 --wait=true         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| start   | -p force-systemd-flag-730500      | force-systemd-flag-730500 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2048 --force-systemd     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-665000            | NoKubernetes-665000       | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-730500         | force-systemd-flag-730500 | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | ssh docker info --format          |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                 |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-730500      | force-systemd-flag-730500 | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:02 UTC |
	| start   | -p kubernetes-upgrade-120400      | kubernetes-upgrade-120400 | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:08 UTC |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-665000            | NoKubernetes-665000       | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:03 UTC |
	| delete  | -p offline-docker-622300          | offline-docker-622300     | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:06 UTC | 13 Dec 23 00:07 UTC |
	| start   | -p cert-expiration-764000         | cert-expiration-764000    | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:06 UTC |                     |
	|         | --memory=2048                     |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| start   | -p stopped-upgrade-632600         | stopped-upgrade-632600    | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:07 UTC |                     |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-120400      | kubernetes-upgrade-120400 | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:08 UTC | 13 Dec 23 00:09 UTC |
	| start   | -p kubernetes-upgrade-120400      | kubernetes-upgrade-120400 | minikube7\jenkins | v1.32.0 | 13 Dec 23 00:09 UTC |                     |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	|---------|-----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:09:11
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:09:11.617669    1552 out.go:296] Setting OutFile to fd 1616 ...
	I1213 00:09:11.618213    1552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:09:11.618274    1552 out.go:309] Setting ErrFile to fd 1688...
	I1213 00:09:11.618323    1552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:09:11.648550    1552 out.go:303] Setting JSON to false
	I1213 00:09:11.655765    1552 start.go:128] hostinfo: {"hostname":"minikube7","uptime":79749,"bootTime":1702346402,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1213 00:09:11.655765    1552 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1213 00:09:11.695611    1552 out.go:177] * [kubernetes-upgrade-120400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1213 00:09:11.744446    1552 notify.go:220] Checking for updates...
	I1213 00:09:11.745624    1552 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1213 00:09:11.746651    1552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:09:11.747725    1552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1213 00:09:11.794729    1552 out.go:177]   - MINIKUBE_LOCATION=17761
	I1213 00:09:11.795677    1552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:09:10.041565    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:10.041565    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:10.041565    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:13.182693    2416 main.go:141] libmachine: [stdout =====>] : 
	I1213 00:09:13.182693    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:11.797997    1552 config.go:182] Loaded profile config "kubernetes-upgrade-120400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1213 00:09:11.799669    1552 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:09:17.852202    1552 out.go:177] * Using the hyperv driver based on existing profile
	I1213 00:09:17.853214    1552 start.go:298] selected driver: hyperv
	I1213 00:09:17.853214    1552 start.go:902] validating driver "hyperv" against &{Name:kubernetes-upgrade-120400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-120400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.60.205 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:17.853396    1552 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:09:17.909859    1552 cni.go:84] Creating CNI manager for ""
	I1213 00:09:17.909986    1552 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 00:09:17.909986    1552 start_flags.go:323] config:
	{Name:kubernetes-upgrade-120400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-120400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.60.205 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:17.910537    1552 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:09:17.912330    1552 out.go:177] * Starting control plane node kubernetes-upgrade-120400 in cluster kubernetes-upgrade-120400
	I1213 00:09:14.194733    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:17.183776    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:17.183776    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:17.184075    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:17.913701    1552 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1213 00:09:17.914720    1552 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1213 00:09:17.914956    1552 cache.go:56] Caching tarball of preloaded images
	I1213 00:09:17.914956    1552 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 00:09:17.915528    1552 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1213 00:09:17.915684    1552 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-120400\config.json ...
	I1213 00:09:17.919085    1552 start.go:365] acquiring machines lock for kubernetes-upgrade-120400: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:09:19.840046    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:19.840046    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:19.843182    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:22.036313    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:22.036313    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:22.036313    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:24.565391    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:24.565391    2416 main.go:141] libmachine: [stderr =====>] : 
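
Each [executing ==>] / [stdout =====>] pair above is one full PowerShell round-trip: the driver spawns a fresh non-interactive powershell.exe to read the VM's state, then another to read the first NIC's first IP address, which is why every query costs two to three seconds of wall time. A stripped-down Go sketch of that pattern, with the command strings copied verbatim from the log (this is not the hyperv driver's actual code):

	// hvquery.go - sketch of the PowerShell round-trip pattern in this log.
	// Requires Windows with Hyper-V; VM name taken from the log.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// runPS executes one command through a fresh non-interactive
	// PowerShell, mirroring the invocations logged by the driver.
	func runPS(cmd string) (string, error) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		vm := "stopped-upgrade-632600"
		state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			log.Fatal(err)
		}
		ip, err := runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("state=%s ip=%s\n", state, ip) // e.g. state=Running ip=172.30.61.188
	}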
	I1213 00:09:24.565885    2416 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\stopped-upgrade-632600\config.json ...
	I1213 00:09:24.568487    2416 machine.go:88] provisioning docker machine ...
	I1213 00:09:24.568588    2416 buildroot.go:166] provisioning hostname "stopped-upgrade-632600"
	I1213 00:09:24.568673    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:26.817323    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:26.817485    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:26.817587    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:29.422863    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:29.422863    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:29.429900    2416 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:29.430647    2416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.188 22 <nil> <nil>}
	I1213 00:09:29.430647    2416 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-632600 && echo "stopped-upgrade-632600" | sudo tee /etc/hostname
	I1213 00:09:29.577944    2416 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-632600
	
	I1213 00:09:29.578005    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:31.865978    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:31.866315    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:31.866315    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:34.483869    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:34.484015    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:34.492511    2416 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:34.493151    2416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.188 22 <nil> <nil>}
	I1213 00:09:34.493151    2416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-632600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-632600/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-632600' | sudo tee -a /etc/hosts; 
				fi
			fi
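
The shell script above is how the provisioner pins the new hostname in /etc/hosts: if no line ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A short Go sketch that assembles the same command string for an arbitrary hostname (the shell text is taken from the log; the helper itself is hypothetical, not minikube's provision code):

	// hostscmd.go - sketch: build the /etc/hosts fix-up command seen above.
	package main

	import "fmt"

	// hostsFixupCmd returns the shell snippet that ensures /etc/hosts
	// maps 127.0.1.1 to the given hostname.
	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("stopped-upgrade-632600"))
	}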
	I1213 00:09:34.634966    2416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:34.635116    2416 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1213 00:09:34.635116    2416 buildroot.go:174] setting up certificates
	I1213 00:09:34.635116    2416 provision.go:83] configureAuth start
	I1213 00:09:34.635269    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:36.835254    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:36.835515    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:36.835515    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:39.474764    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:39.474952    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:39.475024    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:41.723979    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:41.724204    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:41.724204    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:44.346091    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:44.346091    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:44.346209    2416 provision.go:138] copyHostCerts
	I1213 00:09:44.346629    2416 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1213 00:09:44.346629    2416 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1213 00:09:44.347170    2416 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1213 00:09:44.348746    2416 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1213 00:09:44.348746    2416 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1213 00:09:44.349185    2416 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 00:09:44.349979    2416 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1213 00:09:44.349979    2416 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1213 00:09:44.350527    2416 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 00:09:44.351720    2416 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-632600 san=[172.30.61.188 172.30.61.188 localhost 127.0.0.1 minikube stopped-upgrade-632600]
	I1213 00:09:44.682598    2416 provision.go:172] copyRemoteCerts
	I1213 00:09:44.697078    2416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:44.698091    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:46.907805    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:46.908067    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:46.908067    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:49.400699    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:49.400699    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:49.401597    2416 sshutil.go:53] new ssh client: &{IP:172.30.61.188 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\stopped-upgrade-632600\id_rsa Username:docker}
	I1213 00:09:49.503094    2416 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8058961s)
	I1213 00:09:49.503505    2416 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 00:09:49.522689    2416 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 00:09:49.540002    2416 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 00:09:49.555578    2416 provision.go:86] duration metric: configureAuth took 14.9203942s
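
configureAuth (the ~15s step that just finished) regenerates the Docker server certificate with the SANs listed earlier in the log (san=[172.30.61.188 ... localhost 127.0.0.1 minikube stopped-upgrade-632600]) and scps it into /etc/docker. A compact Go sketch of issuing a SAN-bearing server cert with crypto/x509, self-signed here for brevity where minikube signs with its own CA, with values lifted from this log:

	// sancert.go - sketch: server cert with the SANs from the provision log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-632600"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the "san=[...]" log line.
			IPAddresses: []net.IP{net.ParseIP("172.30.61.188"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-632600"},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}

Listing both the IP and the machine name as SANs is what lets clients reach the daemon by either address without TLS verification errors.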
	I1213 00:09:49.555578    2416 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:49.555578    2416 config.go:182] Loaded profile config "stopped-upgrade-632600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1213 00:09:49.556111    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:51.732538    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:51.732538    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:51.732538    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:58.247095    8256 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.7137313s)
	I1213 00:09:58.261994    8256 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1213 00:09:58.322831    8256 out.go:177] 
	W1213 00:09:58.323410    8256 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Wed 2023-12-13 00:01:25 UTC, ends at Wed 2023-12-13 00:09:58 UTC. --
	Dec 13 00:02:18 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.170789358Z" level=info msg="Starting up"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.171687760Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.173052563Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.211567058Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.238058823Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.238165223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240582129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240703030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.240985930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241086630Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241188931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241337031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241436031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.241594032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242023733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242128833Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242147133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242299633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242390534Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242462034Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.242535234Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252200758Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252308958Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252331958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252366958Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252384458Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252396658Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252411658Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252586859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252685859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252708259Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252723959Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252739759Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252812459Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252833959Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252848459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252867259Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252883059Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252906460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.252923560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253020760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253489061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253623461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253648061Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253672361Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253725262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253869862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253892062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253906762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253921562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253935762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253949062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253962962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.253979062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254040862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254137163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254159063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254177863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254192863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254209763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254223863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254236763Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254252863Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254265263Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254277763Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254526164Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254713064Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254802964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:02:18 cert-expiration-764000 dockerd[681]: time="2023-12-13T00:02:18.254897664Z" level=info msg="containerd successfully booted in 0.046035s"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.287694945Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.306157191Z" level=info msg="Loading containers: start."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.528390037Z" level=info msg="Loading containers: done."
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546109381Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546138181Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546146281Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546153281Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546174081Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.546280581Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.602986421Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:02:18 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:18.603109221Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:02:18 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.466288561Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468163761Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468227961Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468304161Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:02:49 cert-expiration-764000 dockerd[675]: time="2023-12-13T00:02:49.468573361Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:02:49 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.538052161Z" level=info msg="Starting up"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.539012361Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.540190961Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1015
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.577046661Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.600148661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.600189061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.602660161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.602851261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603116061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603246561Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603278261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603302361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603314161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603337961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603494561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603595061Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603613261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.603968361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604023361Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604045061Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604057161Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604191061Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604271861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604287961Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604312261Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604327961Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604338261Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604351361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604396861Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604433061Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604448961Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604461861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604493361Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604512061Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604525861Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604538561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604551661Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604568861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604602761Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604614761Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.604655461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605593561Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605726161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.605906361Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606004761Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606147261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606300661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606401361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606459361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606519361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606692361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606841361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.606925661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.607592561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608179461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608278661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608298761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608313461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608332861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608348361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608363261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608375561Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608392561Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608406061Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608417461Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608700761Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608847161Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608922561Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1015]: time="2023-12-13T00:02:50.608963961Z" level=info msg="containerd successfully booted in 0.033071s"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.638998761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.647732061Z" level=info msg="Loading containers: start."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.811806461Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.884143661Z" level=info msg="Loading containers: done."
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901144161Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901169061Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901177061Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901186161Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901260061Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.901300561Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.942317361Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:02:50 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:02:50.942459061Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:02:50 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.839751361Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:03:04 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841529461Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841543661Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841603961Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:03:04 cert-expiration-764000 dockerd[1009]: time="2023-12-13T00:03:04.841791861Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:03:05 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.922000961Z" level=info msg="Starting up"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.924912161Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:05.925982861Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1325
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.958071161Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.985579561Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.985765461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.988633261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.988765961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989028361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989116961Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989146061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989169261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989181561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989248461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989395161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989487661Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989506761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989684561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989844561Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989871861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.989885861Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990135961Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990283161Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990303061Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990327761Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990342161Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990353561Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990365061Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990411261Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990448561Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990464261Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990478061Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990490861Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990507361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990521561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990534561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990547561Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990571961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990589261Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990601561Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.990641661Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.991998461Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992153061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992182161Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992299461Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992381061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992479461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992502661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992519561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992536461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992553861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992570261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992653561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.992837961Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993066061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993111261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993147961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993166461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993240261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993313761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993341661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993375761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993396261Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993428861Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.993443761Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994605261Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994837961Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994898861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 00:03:05 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:05.994964961Z" level=info msg="containerd successfully booted in 0.038484s"
	Dec 13 00:03:06 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:06.261082461Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.207333561Z" level=info msg="Loading containers: start."
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.377587761Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.449019961Z" level=info msg="Loading containers: done."
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469345661Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469369461Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469377361Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469384461Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469403061Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.469443061Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.501343561Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:03:07 cert-expiration-764000 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:03:07 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:03:07.502383261Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578415718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578724604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.578983692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.579293778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.658608094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660012328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660473207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.660505105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674371261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674560953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674601251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.674911636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716086924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716411009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716514404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:16 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:16.716581501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.362991227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.363074623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.363146720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.366402578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.733319501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734085067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734357155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:17 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:17.734546647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688627673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688722069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688762967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.688781866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.709943202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710449082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710708271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:18 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:18.710885564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.160881848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.162670131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.162933629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.163099927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.224942455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225402651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225599349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.225801847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451468859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451562458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451631358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:41 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:41.451679457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263304600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263476598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263523698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.263571997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.354503609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.354940505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.355161803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.355364301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.845510750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.846166344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.846672440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:03:42 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:03:42.847001337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:04:13.212492609Z" level=info msg="ignoring event" container=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.213315508Z" level=info msg="shim disconnected" id=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 namespace=moby
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.213954208Z" level=warning msg="cleaning up after shim disconnected" id=af73331775252acd3c43558bfdfbe1f879d87f557cfeef2c9c45e3e8b1ae5f66 namespace=moby
	Dec 13 00:04:13 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:13.214015308Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152650118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152916217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.152987317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 00:04:14 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:04:14.153059317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.570287799Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:08:46 cert-expiration-764000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.830384447Z" level=info msg="ignoring event" container=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.831207950Z" level=info msg="shim disconnected" id=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.833616657Z" level=warning msg="cleaning up after shim disconnected" id=61dd97408c1b2a94530b024701815a9f113f6afcc6b8ba1a88585c56affb7370 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.833822758Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.898561469Z" level=info msg="shim disconnected" id=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.903175684Z" level=warning msg="cleaning up after shim disconnected" id=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.903250084Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904690389Z" level=info msg="shim disconnected" id=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904782889Z" level=warning msg="cleaning up after shim disconnected" id=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:46.904816689Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.907284697Z" level=info msg="ignoring event" container=1525fa109464a8ad205ce9cc5e0d595668cf0921a26d8bb705e0b22aea77e410 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:46 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:46.909262304Z" level=info msg="ignoring event" container=77da3eb786c45579f7c36456230ac0e35b226ccdcc3d6db7beab3470792c668e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.031959304Z" level=info msg="shim disconnected" id=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.032343505Z" level=warning msg="cleaning up after shim disconnected" id=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.032726606Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.034706813Z" level=info msg="ignoring event" container=341fa7fe64c2a5bbe41d8a5f6f4a53c116aca970a02bcbdb5c81d6b6ef165d28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.034823313Z" level=info msg="ignoring event" container=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035590516Z" level=info msg="shim disconnected" id=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035808916Z" level=warning msg="cleaning up after shim disconnected" id=c69b930da6b1cb6c3db21563d12074bddb606f1af8f56afd96ffdbe8b80e24f1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.035944817Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.123795903Z" level=info msg="ignoring event" container=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125379808Z" level=info msg="shim disconnected" id=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125602409Z" level=warning msg="cleaning up after shim disconnected" id=18f33dc47a9db7daff955df0a8b161c25473a158c8c1bc727534dbb35a8e89fb namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.125776509Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.128588619Z" level=info msg="ignoring event" container=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.128300918Z" level=info msg="shim disconnected" id=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.130847326Z" level=warning msg="cleaning up after shim disconnected" id=bff0ad2f8584c775dfd23780226f25e31aac9c0a6b5b704aeaf04256a773e9d8 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.130965626Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.175488071Z" level=info msg="ignoring event" container=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.176164374Z" level=info msg="ignoring event" container=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.177352878Z" level=info msg="ignoring event" container=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.178226080Z" level=info msg="ignoring event" container=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178606882Z" level=info msg="shim disconnected" id=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178716682Z" level=warning msg="cleaning up after shim disconnected" id=12d6e96685dcd9adb344820adf27f6362336c69c6cedcbd049a82137b195d5b7 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.178741682Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179001583Z" level=info msg="shim disconnected" id=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179034083Z" level=warning msg="cleaning up after shim disconnected" id=a5ebbd3eced68242050da9819de7fd81a8b84cf0a34bc48087b2b6e774c19fb0 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179042983Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179198184Z" level=info msg="shim disconnected" id=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179224284Z" level=warning msg="cleaning up after shim disconnected" id=1294e438e253cda8948fb4ca1f48abf63eeeecbbc8a855794a6aa229802077d1 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179232584Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179389884Z" level=info msg="shim disconnected" id=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179444284Z" level=warning msg="cleaning up after shim disconnected" id=2cad8316b6944522cd2e352856f2564a087fa4f61064cd71c158ef84b97f1730 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.179617185Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:47.251878020Z" level=info msg="ignoring event" container=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252605823Z" level=info msg="shim disconnected" id=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252677723Z" level=warning msg="cleaning up after shim disconnected" id=22b01cb39bb970b82b62b6551a69e4d33358a1b1d9c9fda9fc9948de93411b37 namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.252692423Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:47 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:47.397895696Z" level=warning msg="cleanup warnings time=\"2023-12-13T00:08:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:51.781962083Z" level=info msg="ignoring event" container=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783243787Z" level=info msg="shim disconnected" id=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783368887Z" level=warning msg="cleaning up after shim disconnected" id=de961a403bdd538f46e3ad2dc0208e09c8033576a8d0630e92ce680808bf6d01 namespace=moby
	Dec 13 00:08:51 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:51.783433087Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:56 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:56.948007044Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.034888503Z" level=info msg="ignoring event" container=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037282983Z" level=info msg="shim disconnected" id=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037722280Z" level=warning msg="cleaning up after shim disconnected" id=7426d08beb1c0f347efedee8d4c5f4ff0362f740affdbd19c2d5a051643dcc36 namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1325]: time="2023-12-13T00:08:57.037755580Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.090830251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.091777244Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.091931643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:08:57 cert-expiration-764000 dockerd[1319]: time="2023-12-13T00:08:57.092156641Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: docker.service: Succeeded.
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:08:58 cert-expiration-764000 dockerd[7577]: time="2023-12-13T00:08:58.210551264Z" level=info msg="Starting up"
	Dec 13 00:09:58 cert-expiration-764000 dockerd[7577]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1213 00:09:58.324498    8256 out.go:239] * 
	W1213 00:09:58.326485    8256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 00:09:58.327470    8256 out.go:177] 
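
The repeated start failure captured above is dockerd's hard dependency on containerd: after logging "Starting up", dockerd blocks while dialing /run/containerd/containerd.sock and aborts when its dial context expires, exactly 60 seconds later in the journal. Below is a minimal Go sketch of that retry-until-deadline pattern; it is not dockerd's actual code, and the socket path and 60-second timeout are simply taken from the log above.

	// Sketch only: retry a unix-socket dial until the context deadline,
	// which is effectively what dockerd's blocking containerd client does.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// waitForSocket retries the dial until the socket answers or ctx expires.
	func waitForSocket(ctx context.Context, path string) error {
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", path)
			if err == nil {
				conn.Close()
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // "context deadline exceeded", as in the journal
			case <-time.After(time.Second):
			}
		}
	}

	func main() {
		// the journal shows dockerd giving up 60s after "Starting up"
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		if err := waitForSocket(ctx, "/run/containerd/containerd.sock"); err != nil {
			fmt.Println("containerd never answered:", err)
			return
		}
		fmt.Println("containerd socket is reachable")
	}
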
	I1213 00:09:54.354591    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:54.354738    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:54.361347    2416 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:54.361623    2416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.188 22 <nil> <nil>}
	I1213 00:09:54.362174    2416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 00:09:54.508483    2416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 00:09:54.508483    2416 buildroot.go:70] root file system type: tmpfs
	I1213 00:09:54.508871    2416 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 00:09:54.508952    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:09:56.706133    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:09:56.706133    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:56.706296    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:09:59.453902    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:09:59.453902    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:09:59.461836    2416 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:59.462457    2416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.188 22 <nil> <nil>}
	I1213 00:09:59.462596    2416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 00:09:59.625506    2416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 00:09:59.625506    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:10:01.943453    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:10:01.943453    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:10:01.943453    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	I1213 00:10:04.691260    2416 main.go:141] libmachine: [stdout =====>] : 172.30.61.188
	
	I1213 00:10:04.691260    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:10:04.697963    2416 main.go:141] libmachine: Using SSH client type: native
	I1213 00:10:04.698608    2416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.61.188 22 <nil> <nil>}
	I1213 00:10:04.698608    2416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 00:10:05.858765    2416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 00:10:05.858765    2416 machine.go:91] provisioned docker machine in 41.2900926s
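
The `sudo diff -u ... || { mv ...; daemon-reload; enable; restart; }` one-liner above is an install-if-changed idiom: the provisioner writes the rendered unit to docker.service.new and only moves it into place and restarts Docker when `diff` reports a difference (or, as here, the target file does not exist yet). A local Go sketch of the same idiom follows; the paths are illustrative and this is not minikube's implementation.

	// Sketch of the "replace only if changed" idiom used via shell above.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// installIfChanged writes content to path only when it differs from what
	// is already there; the caller reloads/restarts the service only on change.
	func installIfChanged(path string, content []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // identical: skip the restart entirely
		}
		if err != nil && !os.IsNotExist(err) {
			return false, err // a real read error, not just a missing file
		}
		if err := os.WriteFile(path, content, 0o644); err != nil {
			return false, err
		}
		return true, nil // like `mv docker.service.new docker.service`
	}

	func main() {
		changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		if err != nil {
			fmt.Println("install failed:", err)
			return
		}
		if changed {
			fmt.Println("unit changed: would run daemon-reload && restart docker")
		}
	}
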
	I1213 00:10:05.858765    2416 start.go:300] post-start starting for "stopped-upgrade-632600" (driver="hyperv")
	I1213 00:10:05.858765    2416 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:10:05.874873    2416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:10:05.874873    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-632600 ).state
	I1213 00:10:08.136463    2416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:10:08.136550    2416 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:10:08.136550    2416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-632600 ).networkadapters[0]).ipaddresses[0]
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-12-13 00:01:25 UTC, ends at Wed 2023-12-13 00:11:58 UTC. --
	Dec 13 00:08:58 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:08:58 cert-expiration-764000 dockerd[7577]: time="2023-12-13T00:08:58.210551264Z" level=info msg="Starting up"
	Dec 13 00:09:58 cert-expiration-764000 dockerd[7577]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:09:58 cert-expiration-764000 cri-dockerd[1213]: time="2023-12-13T00:09:58Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: Failed to start Docker Application Container Engine.
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:09:58 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:09:58 cert-expiration-764000 dockerd[7715]: time="2023-12-13T00:09:58.471300234Z" level=info msg="Starting up"
	Dec 13 00:10:58 cert-expiration-764000 dockerd[7715]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 00:10:58 cert-expiration-764000 cri-dockerd[1213]: time="2023-12-13T00:10:58Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Dec 13 00:10:58 cert-expiration-764000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:10:58 cert-expiration-764000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:10:58 cert-expiration-764000 systemd[1]: Failed to start Docker Application Container Engine.
	Dec 13 00:10:58 cert-expiration-764000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Dec 13 00:10:58 cert-expiration-764000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:10:58 cert-expiration-764000 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:10:58 cert-expiration-764000 dockerd[7946]: time="2023-12-13T00:10:58.845368253Z" level=info msg="Starting up"
	Dec 13 00:11:58 cert-expiration-764000 dockerd[7946]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 00:11:58 cert-expiration-764000 cri-dockerd[1213]: time="2023-12-13T00:11:58Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Dec 13 00:11:58 cert-expiration-764000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:11:58 cert-expiration-764000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:11:58 cert-expiration-764000 systemd[1]: Failed to start Docker Application Container Engine.
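
Note the cadence above: each attempt logs "Starting up" and fails exactly 60 seconds later, so start attempts land one minute apart. Given the rendered unit's StartLimitBurst=3 and StartLimitIntervalSec=60, no 60-second window ever contains more than one start, so systemd keeps scheduling restarts (counter 1, 2, ...) rather than tripping the start limit. A small Go check of that arithmetic, using the timestamps from the journal above and assuming standard systemd start-rate-limit semantics:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// docker.service start attempts, from the journal above
		starts := []time.Time{
			time.Date(2023, 12, 13, 0, 8, 58, 0, time.UTC),
			time.Date(2023, 12, 13, 0, 9, 58, 0, time.UTC),
			time.Date(2023, 12, 13, 0, 10, 58, 0, time.UTC),
		}
		const burst = 3
		const interval = 60 * time.Second
		for i, t := range starts {
			n := 0
			for _, u := range starts[:i+1] {
				if t.Sub(u) < interval { // still inside the rate-limit window
					n++
				}
			}
			fmt.Printf("start %d: %d start(s) in the last %v (limit %d, hit=%v)\n",
				i+1, n, interval, burst, n > burst)
		}
	}
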
	
	* 
	* ==> container status <==
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +8.514474] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec13 00:02] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.161175] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[ +30.360820] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.623360] systemd-fstab-generator[976]: Ignoring "noauto" for root device
	[  +0.167506] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.197594] systemd-fstab-generator[1000]: Ignoring "noauto" for root device
	[  +1.333353] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.352553] systemd-fstab-generator[1158]: Ignoring "noauto" for root device
	[  +0.178325] systemd-fstab-generator[1169]: Ignoring "noauto" for root device
	[  +0.187336] systemd-fstab-generator[1180]: Ignoring "noauto" for root device
	[  +0.179107] systemd-fstab-generator[1191]: Ignoring "noauto" for root device
	[  +0.228781] systemd-fstab-generator[1205]: Ignoring "noauto" for root device
	[Dec13 00:03] systemd-fstab-generator[1310]: Ignoring "noauto" for root device
	[  +2.526741] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.888212] systemd-fstab-generator[1693]: Ignoring "noauto" for root device
	[  +0.747753] kauditd_printk_skb: 29 callbacks suppressed
	[ +12.124488] systemd-fstab-generator[2783]: Ignoring "noauto" for root device
	[Dec13 00:04] kauditd_printk_skb: 19 callbacks suppressed
	[Dec13 00:08] systemd-fstab-generator[7051]: Ignoring "noauto" for root device
	[  +0.823157] systemd-fstab-generator[7096]: Ignoring "noauto" for root device
	[  +0.293461] systemd-fstab-generator[7107]: Ignoring "noauto" for root device
	[  +0.424545] systemd-fstab-generator[7120]: Ignoring "noauto" for root device
	
	* 
	* ==> kernel <==
	*  00:12:59 up 11 min,  0 users,  load average: 0.01, 0.22, 0.20
	Linux cert-expiration-764000 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:01:25 UTC, ends at Wed 2023-12-13 00:12:59 UTC. --
	Dec 13 00:12:52 cert-expiration-764000 kubelet[2803]: E1213 00:12:52.008191    2803 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-764000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-764000?resourceVersion=0&timeout=10s\": dial tcp 172.30.59.225:8443: connect: connection refused"
	Dec 13 00:12:52 cert-expiration-764000 kubelet[2803]: E1213 00:12:52.008762    2803 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-764000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-764000?timeout=10s\": dial tcp 172.30.59.225:8443: connect: connection refused"
	Dec 13 00:12:52 cert-expiration-764000 kubelet[2803]: E1213 00:12:52.009311    2803 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-764000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-764000?timeout=10s\": dial tcp 172.30.59.225:8443: connect: connection refused"
	Dec 13 00:12:52 cert-expiration-764000 kubelet[2803]: E1213 00:12:52.009952    2803 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-764000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-764000?timeout=10s\": dial tcp 172.30.59.225:8443: connect: connection refused"
	Dec 13 00:12:52 cert-expiration-764000 kubelet[2803]: E1213 00:12:52.010350    2803 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"cert-expiration-764000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-764000?timeout=10s\": dial tcp 172.30.59.225:8443: connect: connection refused"
	Dec 13 00:12:52 cert-expiration-764000 kubelet[2803]: E1213 00:12:52.010469    2803 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Dec 13 00:12:53 cert-expiration-764000 kubelet[2803]: E1213 00:12:53.686852    2803 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-764000?timeout=10s\": dial tcp 172.30.59.225:8443: connect: connection refused" interval="7s"
	Dec 13 00:12:56 cert-expiration-764000 kubelet[2803]: E1213 00:12:56.965266    2803 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m10.978553931s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Dec 13 00:12:57 cert-expiration-764000 kubelet[2803]: E1213 00:12:57.579691    2803 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-cert-expiration-764000.17a03b8bf4417839", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-cert-expiration-764000", UID:"be0689f57ca61de1195399bee20b6aae", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}, Reason:"Unhealthy", Message:"Liveness probe failed: Get \"http://127.0.0.1:2381/health?exclude=NOSPACE&serializable=true\":
	dial tcp 127.0.0.1:2381: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"cert-expiration-764000"}, FirstTimestamp:time.Date(2023, time.December, 13, 0, 8, 51, 430471737, time.Local), LastTimestamp:time.Date(2023, time.December, 13, 0, 8, 51, 430471737, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"cert-expiration-764000"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 172.30.59.225:8443: connect: connection refused'(may retry after sleeping)
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.126436    2803 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.126543    2803 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.126615    2803 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.127839    2803 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.127875    2803 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.128184    2803 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.128279    2803 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.128304    2803 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: I1213 00:12:59.128387    2803 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.128494    2803 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.128574    2803 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.128824    2803 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.128904    2803 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.129567    2803 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.129801    2803 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Dec 13 00:12:59 cert-expiration-764000 kubelet[2803]: E1213 00:12:59.131045    2803 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1213 00:10:11.469927    6044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1213 00:10:58.484749    6044 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1213 00:10:58.524360    6044 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1213 00:10:58.562340    6044 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1213 00:10:58.600711    6044 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1213 00:10:58.641220    6044 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1213 00:10:58.682797    6044 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1213 00:10:58.727860    6044 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1213 00:11:58.859423    6044 logs.go:281] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1213 00:12:59.128539    6044 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-12-13T00:12:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-12-13T00:12:01Z\" level=fatal msg=\"validate service connection: validate CRI v1 runtime API for endpoint \\\"unix:///var/run/cri-dockerd.sock\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E1213 00:12:59.239818    6044 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-764000 -n cert-expiration-764000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-764000 -n cert-expiration-764000: exit status 2 (13.6027058s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W1213 00:12:59.771892    2996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-764000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-764000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-764000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-764000: (1m5.8222079s)
--- FAIL: TestCertExpiration (969.44s)
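Note on the "%!F(MISSING)" sequences in the kubelet log above: they are present in the captured output itself, not damage to this report. They appear when a message that already contains a percent-encoded socket path ("%2F" for each "/" in /var/run/docker.sock) is passed back through Go's fmt package as a format string, so "%2F" is parsed as a width-2 'F' verb with no operand. A minimal sketch reproducing the artifact (the message literal is illustrative, not minikube source):

package main

import "fmt"

func main() {
	// The URL-escaped socket path contains "%2F" sequences.
	msg := `Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.42/info": connection reset by peer`

	// Safe: the message is passed as data.
	fmt.Printf("%s\n", msg)

	// Unsafe: the message is used as the format string, so each "%2F"
	// is read as a width-2 'F' verb with no argument and prints as
	// "%!F(MISSING)", exactly as in the kubelet lines above.
	fmt.Printf(msg + "\n")
}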

                                                
                                    
TestErrorSpam/setup (187.84s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-471800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 --driver=hyperv
E1212 22:16:22.620034   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:22.635775   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:22.651143   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:22.683264   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:22.730556   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:22.823872   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:22.998923   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:23.331976   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:23.977382   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:25.267100   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:27.831447   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:32.962900   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:16:43.205341   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:17:03.696717   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:17:44.664666   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-471800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 --driver=hyperv: (3m7.8404691s)
error_spam_test.go:96: unexpected stderr: "W1212 22:15:45.120645   14084 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-471800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
- KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
- MINIKUBE_LOCATION=17761
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-471800 in cluster nospam-471800
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-471800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W1212 22:15:45.120645   14084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (187.84s)
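The cluster start in TestErrorSpam/setup actually completes ("Done! kubectl is now configured ..."); the subtest fails only because its stderr expectation is an exact match and does not tolerate the host-environment warning about the unresolvable Docker CLI context, the same warning that trips TestFunctional/parallel/ConfigCmd below. A minimal sketch of the kind of allow-list filtering that would let such an assertion ignore known host noise (a hypothetical helper, not minikube's actual test code):

package main

import (
	"fmt"
	"strings"
)

// knownNoise lists stderr fragments that reflect the host environment
// rather than the behavior under test. Hypothetical allow-list.
var knownNoise = []string{
	`Unable to resolve the current Docker CLI context "default"`,
}

// filterStderr drops lines matching any known-noise fragment before
// the test compares stderr against its expectation.
func filterStderr(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		noisy := false
		for _, frag := range knownNoise {
			if strings.Contains(line, frag) {
				noisy = true
				break
			}
		}
		if !noisy && strings.TrimSpace(line) != "" {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n")
}

func main() {
	got := `W1212 22:15:45.120645   14084 main.go:291] Unable to resolve the current Docker CLI context "default": ...`
	fmt.Printf("filtered: %q\n", filterStderr(got)) // filtered: ""
}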

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-347300 config unset cpus" to be -""- but got *"W1212 22:30:53.280347   14712 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-347300 config get cpus: exit status 14 (266.364ms)

                                                
                                                
** stderr ** 
	W1212 22:30:53.600547    9928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-347300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W1212 22:30:53.600547    9928 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-347300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W1212 22:30:53.870908    1248 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-347300 config get cpus" to be -""- but got *"W1212 22:30:54.163929    1800 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-347300 config unset cpus" to be -""- but got *"W1212 22:30:54.446562   14660 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-347300 config get cpus: exit status 14 (252.3846ms)

                                                
                                                
** stderr ** 
	W1212 22:30:54.726928   14824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-347300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W1212 22:30:54.726928   14824 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-347300 service --namespace=default --https --url hello-node: exit status 1 (15.0611247s)

                                                
                                                
** stderr ** 
	W1212 22:31:38.445854    8100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-347300 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-347300 service hello-node --url --format={{.IP}}: exit status 1 (15.0451085s)

                                                
                                                
** stderr ** 
	W1212 22:31:53.533760    9224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-347300 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1547: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-347300 service hello-node --url: exit status 1 (15.0495325s)

                                                
                                                
** stderr ** 
	W1212 22:32:08.544967    8112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-347300 service hello-node --url": exit status 1
functional_test.go:1564: found endpoint for hello-node: 
functional_test.go:1572: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.05s)
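All three ServiceCmd subtests above fail the same way: the service command exits with status 1 and prints nothing to stdout, so the assertions see an empty string ("" is not a valid IP, scheme ""). If the harness parses that stdout with Go's net/url (an assumption about the harness; the standard-library behavior below is certain), an empty input fully explains the empty scheme, because url.Parse("") succeeds and returns a URL whose fields are all empty:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// When the minikube command fails, its stdout is empty. Parse("")
	// returns a *url.URL with every field empty and a nil error, so a
	// scheme check sees "" instead of "http" rather than a parse error.
	u, err := url.Parse("")
	fmt.Printf("err=%v scheme=%q host=%q\n", err, u.Scheme, u.Host)
	// Output: err=<nil> scheme="" host=""
}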

                                                
                                    
TestMinikubeProfile (541.9s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-983800 --driver=hyperv
E1212 22:54:09.414963   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:55:53.174259   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-983800 --driver=hyperv: (3m6.6405699s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-234000 --driver=hyperv
E1212 22:56:22.622330   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:56:25.432447   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:56:53.260293   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:57:16.364702   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p second-234000 --driver=hyperv: exit status 90 (3m26.7805595s)

                                                
                                                
-- stdout --
	* [second-234000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node second-234000 in cluster second-234000
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 22:56:02.905605    9840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 22:57:06 UTC, ends at Tue 2023-12-12 22:59:29 UTC. --
	Dec 12 22:57:57 second-234000 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.423342208Z" level=info msg="Starting up"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.424183411Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.425406014Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=689
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.456284698Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.481399667Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.481490167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483199872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483381472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483666573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483761173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483862973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484056974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484100674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484220474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484744576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484907876Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484927376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.485164677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.485274577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.485515878Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.485657278Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498701414Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498806714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498827314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498880314Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498900414Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498966515Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499002515Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499124015Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499164115Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499180915Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499194815Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499209715Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499231715Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499245415Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499258115Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499278415Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499335116Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499350216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499362216Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499448516Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499916417Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500022118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500059718Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500082918Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500186818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500275218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500353218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500385819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500416319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500429619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500441819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500454119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500468619Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500542319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500667319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500686119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500704619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500718219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500732019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500744619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500757720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500771820Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500784420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500795620Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.501117621Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.501337221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.501386521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.501406721Z" level=info msg="containerd successfully booted in 0.045960s"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.538065721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.554688067Z" level=info msg="Loading containers: start."
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.754584512Z" level=info msg="Loading containers: done."
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783358190Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783442991Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783491091Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783601891Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783626691Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783720391Z" level=info msg="Daemon has completed initialization"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.838417840Z" level=info msg="API listen on [::]:2376"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.838483041Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 22:57:57 second-234000 systemd[1]: Started Docker Application Container Engine.
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.383839140Z" level=info msg="Processing signal 'terminated'"
	Dec 12 22:58:28 second-234000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.385681640Z" level=info msg="Daemon shutdown complete"
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.385753240Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.385821240Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.385943340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 22:58:29 second-234000 systemd[1]: docker.service: Succeeded.
	Dec 12 22:58:29 second-234000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 22:58:29 second-234000 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 22:58:29 second-234000 dockerd[1020]: time="2023-12-12T22:58:29.460863840Z" level=info msg="Starting up"
	Dec 12 22:59:29 second-234000 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 22:59:29 second-234000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 22:59:29 second-234000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 22:59:29 second-234000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-windows-amd64.exe start -p second-234000 --driver=hyperv": exit status 90
panic.go:523: *** TestMinikubeProfile FAILED at 2023-12-12 22:59:29.5733783 +0000 UTC m=+3327.617098301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p second-234000 -n second-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p second-234000 -n second-234000: exit status 6 (12.0934916s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 22:59:29.695715    4392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1212 22:59:41.595231    4392 status.go:415] kubeconfig endpoint: extract IP: "second-234000" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "second-234000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "second-234000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-234000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-234000: (1m1.9529441s)
panic.go:523: *** TestMinikubeProfile FAILED at 2023-12-12 23:00:43.6209698 +0000 UTC m=+3401.664356601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p first-983800 -n first-983800
E1212 23:00:53.177248   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p first-983800 -n first-983800: (12.1265867s)
helpers_test.go:244: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p first-983800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p first-983800 logs -n 25: (8.1358965s)
helpers_test.go:252: TestMinikubeProfile logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                   |           Profile           |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p functional-347300                     | functional-347300           | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:36 UTC | 12 Dec 23 22:37 UTC |
	| start   | -p image-247600                          | image-247600                | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:37 UTC | 12 Dec 23 22:40 UTC |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| image   | build -t aaa:latest                      | image-247600                | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:40 UTC | 12 Dec 23 22:40 UTC |
	|         | ./testdata/image-build/test-normal       |                             |                   |         |                     |                     |
	|         | -p image-247600                          |                             |                   |         |                     |                     |
	| image   | build -t aaa:latest                      | image-247600                | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:40 UTC | 12 Dec 23 22:41 UTC |
	|         | --build-opt=build-arg=ENV_A=test_env_str |                             |                   |         |                     |                     |
	|         | --build-opt=no-cache                     |                             |                   |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p       |                             |                   |         |                     |                     |
	|         | image-247600                             |                             |                   |         |                     |                     |
	| image   | build -t aaa:latest                      | image-247600                | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:41 UTC | 12 Dec 23 22:41 UTC |
	|         | ./testdata/image-build/test-normal       |                             |                   |         |                     |                     |
	|         | --build-opt=no-cache -p                  |                             |                   |         |                     |                     |
	|         | image-247600                             |                             |                   |         |                     |                     |
	| image   | build -t aaa:latest                      | image-247600                | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:41 UTC | 12 Dec 23 22:41 UTC |
	|         | -f inner/Dockerfile                      |                             |                   |         |                     |                     |
	|         | ./testdata/image-build/test-f            |                             |                   |         |                     |                     |
	|         | -p image-247600                          |                             |                   |         |                     |                     |
	| delete  | -p image-247600                          | image-247600                | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:41 UTC | 12 Dec 23 22:42 UTC |
	| start   | -p ingress-addon-legacy-443200           | ingress-addon-legacy-443200 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:42 UTC | 12 Dec 23 22:45 UTC |
	|         | --kubernetes-version=v1.18.20            |                             |                   |         |                     |                     |
	|         | --memory=4096 --wait=true                |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |                   |         |                     |                     |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| addons  | ingress-addon-legacy-443200              | ingress-addon-legacy-443200 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:45 UTC | 12 Dec 23 22:46 UTC |
	|         | addons enable ingress                    |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |                   |         |                     |                     |
	| addons  | ingress-addon-legacy-443200              | ingress-addon-legacy-443200 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:46 UTC | 12 Dec 23 22:46 UTC |
	|         | addons enable ingress-dns                |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |                   |         |                     |                     |
	| ssh     | ingress-addon-legacy-443200              | ingress-addon-legacy-443200 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:46 UTC | 12 Dec 23 22:47 UTC |
	|         | ssh curl -s http://127.0.0.1/            |                             |                   |         |                     |                     |
	|         | -H 'Host: nginx.example.com'             |                             |                   |         |                     |                     |
	| ip      | ingress-addon-legacy-443200 ip           | ingress-addon-legacy-443200 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:47 UTC | 12 Dec 23 22:47 UTC |
	| addons  | ingress-addon-legacy-443200              | ingress-addon-legacy-443200 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:47 UTC | 12 Dec 23 22:47 UTC |
	|         | addons disable ingress-dns               |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |                   |         |                     |                     |
	| addons  | ingress-addon-legacy-443200              | ingress-addon-legacy-443200 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:47 UTC | 12 Dec 23 22:47 UTC |
	|         | addons disable ingress                   |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |                   |         |                     |                     |
	| delete  | -p ingress-addon-legacy-443200           | ingress-addon-legacy-443200 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:47 UTC | 12 Dec 23 22:48 UTC |
	| start   | -p json-output-323100                    | json-output-323100          | testUser          | v1.32.0 | 12 Dec 23 22:48 UTC | 12 Dec 23 22:51 UTC |
	|         | --output=json --user=testUser            |                             |                   |         |                     |                     |
	|         | --memory=2200 --wait=true                |                             |                   |         |                     |                     |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| pause   | -p json-output-323100                    | json-output-323100          | testUser          | v1.32.0 | 12 Dec 23 22:51 UTC | 12 Dec 23 22:52 UTC |
	|         | --output=json --user=testUser            |                             |                   |         |                     |                     |
	| unpause | -p json-output-323100                    | json-output-323100          | testUser          | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:52 UTC |
	|         | --output=json --user=testUser            |                             |                   |         |                     |                     |
	| stop    | -p json-output-323100                    | json-output-323100          | testUser          | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:52 UTC |
	|         | --output=json --user=testUser            |                             |                   |         |                     |                     |
	| delete  | -p json-output-323100                    | json-output-323100          | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:52 UTC |
	| start   | -p json-output-error-287300              | json-output-error-287300    | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:52 UTC |                     |
	|         | --memory=2200 --output=json              |                             |                   |         |                     |                     |
	|         | --wait=true --driver=fail                |                             |                   |         |                     |                     |
	| delete  | -p json-output-error-287300              | json-output-error-287300    | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:52 UTC |
	| start   | -p first-983800                          | first-983800                | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:56 UTC |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| start   | -p second-234000                         | second-234000               | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:56 UTC |                     |
	|         | --driver=hyperv                          |                             |                   |         |                     |                     |
	| delete  | -p second-234000                         | second-234000               | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:59 UTC | 12 Dec 23 23:00 UTC |
	|---------|------------------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:56:02
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:56:02.978652    9840 out.go:296] Setting OutFile to fd 856 ...
	I1212 22:56:02.979509    9840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:56:02.979509    9840 out.go:309] Setting ErrFile to fd 812...
	I1212 22:56:02.979509    9840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:56:03.001651    9840 out.go:303] Setting JSON to false
	I1212 22:56:03.005798    9840 start.go:128] hostinfo: {"hostname":"minikube7","uptime":75360,"bootTime":1702346402,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 22:56:03.006000    9840 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 22:56:03.007129    9840 out.go:177] * [second-234000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 22:56:03.008039    9840 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 22:56:03.007907    9840 notify.go:220] Checking for updates...
	I1212 22:56:03.009468    9840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:56:03.010291    9840 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 22:56:03.011001    9840 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:56:03.011359    9840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:56:03.013667    9840 config.go:182] Loaded profile config "first-983800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 22:56:03.013747    9840 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:56:08.452556    9840 out.go:177] * Using the hyperv driver based on user configuration
	I1212 22:56:08.453609    9840 start.go:298] selected driver: hyperv
	I1212 22:56:08.453609    9840 start.go:902] validating driver "hyperv" against <nil>
	I1212 22:56:08.453609    9840 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:56:08.453928    9840 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:56:08.516967    9840 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I1212 22:56:08.518113    9840 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 22:56:08.518113    9840 cni.go:84] Creating CNI manager for ""
	I1212 22:56:08.518113    9840 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 22:56:08.518113    9840 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 22:56:08.518113    9840 start_flags.go:323] config:
	{Name:second-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:second-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:56:08.518113    9840 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:56:08.520334    9840 out.go:177] * Starting control plane node second-234000 in cluster second-234000
	I1212 22:56:08.521279    9840 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 22:56:08.521450    9840 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 22:56:08.521528    9840 cache.go:56] Caching tarball of preloaded images
	I1212 22:56:08.522010    9840 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 22:56:08.522010    9840 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 22:56:08.522482    9840 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\second-234000\config.json ...
	I1212 22:56:08.522745    9840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\second-234000\config.json: {Name:mkf2452ceebe34a6b41103dabbda685b7e58b8cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
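The two lines above show the profile being persisted: the cluster config printed a few lines earlier is serialized to profiles\second-234000\config.json, guarded by a named file lock so that concurrent minikube invocations cannot interleave writes. A minimal sketch of the same save pattern in Go follows; the ClusterConfig struct and the atomic temp-file-then-rename step are illustrative stand-ins, not minikube's actual code.

    package main

    import (
    	"encoding/json"
    	"os"
    	"path/filepath"
    )

    // ClusterConfig is a deliberately tiny stand-in for minikube's real
    // cluster config struct; only a few illustrative fields are shown.
    type ClusterConfig struct {
    	Name   string `json:"Name"`
    	Driver string `json:"Driver"`
    	Memory int    `json:"Memory"`
    	CPUs   int    `json:"CPUs"`
    }

    // saveConfig writes the profile config atomically: marshal, write to
    // a temp file, then rename over config.json so a reader never sees a
    // half-written file. (minikube additionally takes a named file lock,
    // as the lock.go line above shows.)
    func saveConfig(profileDir string, cfg ClusterConfig) error {
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		return err
    	}
    	tmp := filepath.Join(profileDir, "config.json.tmp")
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(profileDir, "config.json"))
    }

    func main() {
    	_ = saveConfig(os.TempDir(), ClusterConfig{
    		Name: "second-234000", Driver: "hyperv", Memory: 6000, CPUs: 2,
    	})
    }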
	I1212 22:56:08.523462    9840 start.go:365] acquiring machines lock for second-234000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:56:08.523462    9840 start.go:369] acquired machines lock for "second-234000" in 0s
	I1212 22:56:08.524112    9840 start.go:93] Provisioning new machine with config: &{Name:second-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:second-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 22:56:08.524112    9840 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 22:56:08.524851    9840 out.go:204] * Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I1212 22:56:08.524851    9840 start.go:159] libmachine.API.Create for "second-234000" (driver="hyperv")
	I1212 22:56:08.524851    9840 client.go:168] LocalClient.Create starting
	I1212 22:56:08.525796    9840 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 22:56:08.525931    9840 main.go:141] libmachine: Decoding PEM data...
	I1212 22:56:08.525931    9840 main.go:141] libmachine: Parsing certificate...
	I1212 22:56:08.525931    9840 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 22:56:08.525931    9840 main.go:141] libmachine: Decoding PEM data...
	I1212 22:56:08.525931    9840 main.go:141] libmachine: Parsing certificate...
	I1212 22:56:08.525931    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 22:56:10.577715    9840 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 22:56:10.577772    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:10.577772    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 22:56:12.327969    9840 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 22:56:12.327969    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:12.327969    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 22:56:13.804521    9840 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 22:56:13.804521    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:13.804620    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 22:56:17.526194    9840 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 22:56:17.526194    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:17.529239    9840 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 22:56:18.001513    9840 main.go:141] libmachine: Creating SSH key...
	I1212 22:56:18.119521    9840 main.go:141] libmachine: Creating VM...
	I1212 22:56:18.119521    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 22:56:21.040356    9840 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 22:56:21.040482    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:21.040578    9840 main.go:141] libmachine: Using switch "Default Switch"
	I1212 22:56:21.040578    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 22:56:22.827121    9840 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 22:56:22.827121    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:22.827121    9840 main.go:141] libmachine: Creating VHD
	I1212 22:56:22.827121    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 22:56:26.562765    9840 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F909C6D3-C643-4EE7-AF09-C9A0D2202E35
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 22:56:26.562765    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:26.562765    9840 main.go:141] libmachine: Writing magic tar header
	I1212 22:56:26.562852    9840 main.go:141] libmachine: Writing SSH key tar header
	I1212 22:56:26.574132    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 22:56:29.784738    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:29.784738    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:29.784738    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\disk.vhd' -SizeBytes 20000MB
	I1212 22:56:32.304200    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:32.304200    9840 main.go:141] libmachine: [stderr =====>] : 
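The disk-creation sequence above is worth spelling out: a 10MB fixed VHD is created, the "magic tar header" and SSH key are written into its raw data area, then the image is converted to a dynamic VHD and resized to 20000MB. The tar trick is a boot2docker convention inherited from docker-machine's Hyper-V driver: a fixed VHD stores raw disk content first and its footer last, so a tar archive written at offset 0 survives the conversion, and the guest extracts the SSH key from it on first boot. A hedged sketch of the tar-writing half in Go; the archive layout here follows that convention but is not a verbatim copy of minikube's code.

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // writeKeyTar writes a tar stream containing the machine's public
    // SSH key at the very start of the raw fixed disk image, where a
    // boot2docker-style guest looks for it on first boot.
    func writeKeyTar(diskPath string, pubKey []byte) error {
    	f, err := os.OpenFile(diskPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	// Offset 0: a fixed VHD keeps plain, zeroed disk data at the
    	// front of the file, so we can write the archive directly.
    	tw := tar.NewWriter(f)
    	if err := tw.WriteHeader(&tar.Header{
    		Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0o700,
    	}); err != nil {
    		return err
    	}
    	if err := tw.WriteHeader(&tar.Header{
    		Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(pubKey)),
    	}); err != nil {
    		return err
    	}
    	if _, err := tw.Write(pubKey); err != nil {
    		return err
    	}
    	return tw.Close()
    }

    func main() {
    	key, _ := os.ReadFile("id_rsa.pub") // path is illustrative
    	_ = writeKeyTar("disk.vhd", key)
    }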
	I1212 22:56:32.304200    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM second-234000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000' -SwitchName 'Default Switch' -MemoryStartupBytes 6000MB
	I1212 22:56:35.843183    9840 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	second-234000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 22:56:35.843183    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:35.843183    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName second-234000 -DynamicMemoryEnabled $false
	I1212 22:56:38.113563    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:38.113785    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:38.113785    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor second-234000 -Count 2
	I1212 22:56:40.297586    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:40.297586    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:40.297586    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName second-234000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\boot2docker.iso'
	I1212 22:56:42.851590    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:42.851590    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:42.851590    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName second-234000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\disk.vhd'
	I1212 22:56:45.470139    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:45.470209    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:45.470209    9840 main.go:141] libmachine: Starting VM...
	I1212 22:56:45.470209    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM second-234000
	I1212 22:56:48.407141    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:48.407141    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:48.407141    9840 main.go:141] libmachine: Waiting for host to start...
	I1212 22:56:48.407141    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:56:50.722994    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:56:50.722994    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:50.722994    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:56:53.306877    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:53.306877    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:54.310498    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:56:56.521193    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:56:56.521193    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:56:56.521488    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:56:59.057328    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:56:59.057328    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:00.073264    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:02.265628    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:02.265852    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:02.265912    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:04.816683    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:57:04.816884    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:05.819514    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:08.057106    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:08.057106    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:08.057106    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:10.660603    9840 main.go:141] libmachine: [stdout =====>] : 
	I1212 22:57:10.660603    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:11.661321    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:13.927086    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:13.927372    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:13.927372    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:16.459639    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:16.459639    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:16.459822    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:18.590740    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:18.590740    9840 main.go:141] libmachine: [stderr =====>] : 
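The repeated Get-VM/ipaddresses pairs above are a simple poll: Hyper-V reports the VM as Running almost immediately, but the guest only publishes an address once its network stack is up, so the driver keeps asking (with roughly a one-second pause between rounds, visible in the timestamps) until the adapter query finally returns 172.30.51.161. Roughly, with the PowerShell invocations spelled out the same way the log shows them, a sketch rather than minikube's exact loop:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // ps runs a single PowerShell expression the way libmachine does:
    // non-interactive, no profile, stdout captured as text.
    func ps(cmd string) (string, error) {
    	out, err := exec.Command("powershell.exe",
    		"-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls until the VM's first network adapter reports an
    // address, or the deadline passes.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
    		if err == nil && state == "Running" {
    			ip, _ := ps(fmt.Sprintf(
    				"(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
    			if ip != "" {
    				return ip, nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
    	ip, err := waitForIP("second-234000", 5*time.Minute)
    	fmt.Println(ip, err)
    }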
	I1212 22:57:18.590898    9840 machine.go:88] provisioning docker machine ...
	I1212 22:57:18.590898    9840 buildroot.go:166] provisioning hostname "second-234000"
	I1212 22:57:18.590971    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:20.748127    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:20.748127    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:20.748127    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:23.285180    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:23.285180    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:23.292264    9840 main.go:141] libmachine: Using SSH client type: native
	I1212 22:57:23.301068    9840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.161 22 <nil> <nil>}
	I1212 22:57:23.301068    9840 main.go:141] libmachine: About to run SSH command:
	sudo hostname second-234000 && echo "second-234000" | sudo tee /etc/hostname
	I1212 22:57:23.453563    9840 main.go:141] libmachine: SSH cmd err, output: <nil>: second-234000
	
	I1212 22:57:23.453563    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:25.550127    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:25.550127    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:25.550372    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:28.086865    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:28.087163    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:28.092748    9840 main.go:141] libmachine: Using SSH client type: native
	I1212 22:57:28.093462    9840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.161 22 <nil> <nil>}
	I1212 22:57:28.093462    9840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\ssecond-234000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 second-234000/g' /etc/hosts;
				else 
					echo '127.0.1.1 second-234000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:57:28.227985    9840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:57:28.227985    9840 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 22:57:28.227985    9840 buildroot.go:174] setting up certificates
	I1212 22:57:28.227985    9840 provision.go:83] configureAuth start
	I1212 22:57:28.227985    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:30.309027    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:30.309027    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:30.309027    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:32.869691    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:32.869691    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:32.869887    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:34.997031    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:34.997031    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:34.997214    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:37.551825    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:37.551825    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:37.552179    9840 provision.go:138] copyHostCerts
	I1212 22:57:37.552647    9840 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 22:57:37.552647    9840 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 22:57:37.554853    9840 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 22:57:37.556171    9840 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 22:57:37.556171    9840 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 22:57:37.556171    9840 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 22:57:37.557753    9840 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 22:57:37.557753    9840 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 22:57:37.558471    9840 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 22:57:37.558471    9840 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.second-234000 san=[172.30.51.161 172.30.51.161 localhost 127.0.0.1 minikube second-234000]
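The server certificate generated here is an ordinary TLS serving cert signed by minikube's local CA, with the VM's IP, localhost, and the cluster and host names in the subject alternative names, which is exactly what the san=[...] field above enumerates. A compressed sketch of the same operation with Go's standard library; the key material is generated in place for illustration, whereas the real code loads ca.pem and ca-key.pem from the certs directory shown in the paths above.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert issues a serving certificate for the given SANs,
    // signed by the provided CA. It returns the DER cert and its key.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
    	ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.second-234000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,      // e.g. 172.30.51.161, 127.0.0.1
    		DNSNames:     dnsNames, // e.g. localhost, minikube, second-234000
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }

    func main() {
    	// Self-signed throwaway CA, standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
    		NotBefore: time.Now(), NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour),
    		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)
    	_, _, _ = signServerCert(ca, caKey,
    		[]net.IP{net.ParseIP("172.30.51.161"), net.ParseIP("127.0.0.1")},
    		[]string{"localhost", "minikube", "second-234000"})
    }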
	I1212 22:57:37.711371    9840 provision.go:172] copyRemoteCerts
	I1212 22:57:37.729037    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:57:37.729037    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:39.852022    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:39.852022    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:39.852022    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:42.422323    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:42.422323    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:42.422878    9840 sshutil.go:53] new ssh client: &{IP:172.30.51.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\id_rsa Username:docker}
	I1212 22:57:42.530091    9840 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8010327s)
	I1212 22:57:42.530786    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 22:57:42.570345    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 22:57:42.610361    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:57:42.647428    9840 provision.go:86] duration metric: configureAuth took 14.4193411s
	I1212 22:57:42.647462    9840 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:57:42.648057    9840 config.go:182] Loaded profile config "second-234000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 22:57:42.648138    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:44.754260    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:44.754260    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:44.754260    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:47.275741    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:47.275741    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:47.281783    9840 main.go:141] libmachine: Using SSH client type: native
	I1212 22:57:47.282560    9840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.161 22 <nil> <nil>}
	I1212 22:57:47.282560    9840 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 22:57:47.408779    9840 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 22:57:47.408779    9840 buildroot.go:70] root file system type: tmpfs
	I1212 22:57:47.409034    9840 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 22:57:47.409141    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:49.528937    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:49.528937    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:49.528937    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:52.019076    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:52.019305    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:52.024830    9840 main.go:141] libmachine: Using SSH client type: native
	I1212 22:57:52.025597    9840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.161 22 <nil> <nil>}
	I1212 22:57:52.025597    9840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 22:57:52.172545    9840 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 22:57:52.172545    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:54.326690    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:54.327068    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:54.327068    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:57:56.878813    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:57:56.878813    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:56.883460    9840 main.go:141] libmachine: Using SSH client type: native
	I1212 22:57:56.884289    9840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.161 22 <nil> <nil>}
	I1212 22:57:56.884289    9840 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 22:57:57.840919    9840 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 22:57:57.840919    9840 machine.go:91] provisioned docker machine in 39.2498441s
	I1212 22:57:57.840919    9840 client.go:171] LocalClient.Create took 1m49.3155758s
	I1212 22:57:57.840919    9840 start.go:167] duration metric: libmachine.API.Create for "second-234000" took 1m49.3155758s
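Note the update idiom a few stanzas above: the new unit is written to docker.service.new, diffed against the installed one, and only on a difference is it moved into place and followed by daemon-reload, enable, and restart. On a fresh VM the diff fails because no old file exists yet, so the "can't stat" message here is the expected first-boot path, not an error. The same write-if-changed pattern in Go, with illustrative paths and reload command:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // installIfChanged replaces dst with content and runs reload only
    // when the content actually differs, keeping the step idempotent
    // and avoiding a needless service restart.
    func installIfChanged(dst string, content []byte, reload []string) error {
    	old, err := os.ReadFile(dst)
    	if err == nil && bytes.Equal(old, content) {
    		return nil // unchanged: skip the rename and the reload
    	}
    	if err := os.WriteFile(dst+".new", content, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(dst+".new", dst); err != nil {
    		return err
    	}
    	return exec.Command(reload[0], reload[1:]...).Run()
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=example\n")
    	_ = installIfChanged("/tmp/docker.service", unit,
    		[]string{"systemctl", "daemon-reload"})
    }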
	I1212 22:57:57.840919    9840 start.go:300] post-start starting for "second-234000" (driver="hyperv")
	I1212 22:57:57.840919    9840 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:57:57.854003    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:57:57.854003    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:57:59.925883    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:57:59.926234    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:57:59.926292    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:58:02.483229    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:58:02.483229    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:02.483898    9840 sshutil.go:53] new ssh client: &{IP:172.30.51.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\id_rsa Username:docker}
	I1212 22:58:02.591840    9840 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7378161s)
	I1212 22:58:02.605344    9840 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:58:02.611159    9840 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:58:02.611159    9840 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 22:58:02.611703    9840 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 22:58:02.612939    9840 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 22:58:02.626615    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:58:02.641635    9840 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 22:58:02.681356    9840 start.go:303] post-start completed in 4.8404154s
	I1212 22:58:02.684054    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:58:04.850410    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:58:04.850410    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:04.850589    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:58:07.424174    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:58:07.424174    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:07.424174    9840 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\second-234000\config.json ...
	I1212 22:58:07.426520    9840 start.go:128] duration metric: createHost completed in 1m58.9018733s
	I1212 22:58:07.427071    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:58:09.553419    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:58:09.553419    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:09.553594    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:58:12.106730    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:58:12.106730    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:12.112816    9840 main.go:141] libmachine: Using SSH client type: native
	I1212 22:58:12.113649    9840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.161 22 <nil> <nil>}
	I1212 22:58:12.113649    9840 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 22:58:12.240734    9840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702421892.239342224
	
	I1212 22:58:12.240890    9840 fix.go:206] guest clock: 1702421892.239342224
	I1212 22:58:12.240890    9840 fix.go:219] Guest: 2023-12-12 22:58:12.239342224 +0000 UTC Remote: 2023-12-12 22:58:07.4270716 +0000 UTC m=+124.617598101 (delta=4.812270624s)
	I1212 22:58:12.240890    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:58:14.361229    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:58:14.361229    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:14.361463    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:58:16.865364    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:58:16.865553    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:16.871372    9840 main.go:141] libmachine: Using SSH client type: native
	I1212 22:58:16.872094    9840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.161 22 <nil> <nil>}
	I1212 22:58:16.872094    9840 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702421892
	I1212 22:58:17.007992    9840 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 22:58:12 UTC 2023
	
	I1212 22:58:17.008140    9840 fix.go:226] clock set: Tue Dec 12 22:58:12 UTC 2023
	 (err=<nil>)
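To unpack the clock fix above: the guest reported 1702421892.239 (22:58:12 UTC) while the host-side reference time was 22:58:07.427, a drift of about 4.8 seconds, so the driver pins the guest clock with sudo date -s @<epoch>. A sketch of the check follows; runSSH is a hypothetical stand-in for minikube's SSH runner, and minikube's actual rule for when and to what it sets the clock differs in detail.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // runSSH is a hypothetical stand-in for an SSH runner; here it just
    // returns the guest clock reading seen in the log above.
    func runSSH(cmd string) (string, error) {
    	return "1702421892.239342224", nil
    }

    // syncGuestClock reads the guest's epoch time and, if it has drifted
    // past maxDrift from the local clock, sets it to the second.
    func syncGuestClock(maxDrift time.Duration) error {
    	out, err := runSSH("date +%s.%N")
    	if err != nil {
    		return err
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	drift := guest.Sub(time.Now())
    	if drift < 0 {
    		drift = -drift
    	}
    	if drift <= maxDrift {
    		return nil
    	}
    	_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
    	return err
    }

    func main() {
    	fmt.Println(syncGuestClock(2 * time.Second))
    }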
	I1212 22:58:17.008140    9840 start.go:83] releasing machines lock for "second-234000", held for 2m8.4841001s
	I1212 22:58:17.008460    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:58:19.152390    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:58:19.152390    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:19.152390    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:58:21.689231    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:58:21.689231    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:21.693297    9840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:58:21.693373    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:58:21.705177    9840 ssh_runner.go:195] Run: cat /version.json
	I1212 22:58:21.705177    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM second-234000 ).state
	I1212 22:58:23.908749    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:58:23.908749    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:23.908749    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:58:23.924342    9840 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 22:58:23.924342    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:23.924342    9840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM second-234000 ).networkadapters[0]).ipaddresses[0]
	I1212 22:58:26.595646    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:58:26.595646    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:26.596305    9840 sshutil.go:53] new ssh client: &{IP:172.30.51.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\id_rsa Username:docker}
	I1212 22:58:26.615835    9840 main.go:141] libmachine: [stdout =====>] : 172.30.51.161
	
	I1212 22:58:26.615835    9840 main.go:141] libmachine: [stderr =====>] : 
	I1212 22:58:26.616826    9840 sshutil.go:53] new ssh client: &{IP:172.30.51.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\second-234000\id_rsa Username:docker}
	I1212 22:58:26.820409    9840 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1270895s)
	I1212 22:58:26.820409    9840 ssh_runner.go:235] Completed: cat /version.json: (5.1152091s)
	I1212 22:58:26.834992    9840 ssh_runner.go:195] Run: systemctl --version
	I1212 22:58:26.855703    9840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 22:58:26.861943    9840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:58:26.874721    9840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:58:26.897845    9840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
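
The find invocation above renames every bridge or podman CNI config under /etc/cni/net.d to `<name>.mk_disabled` so the runtime only loads minikube's own config; the following line confirms 87-podman-bridge.conflist was disabled. The same move done directly in Go, as an illustrative sketch rather than minikube's implementation:

    package cni

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfigs renames bridge/podman CNI configs in dir, mirroring
    // the `find ... -exec mv {} {}.mk_disabled` step logged above.
    func disableBridgeConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }
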
	I1212 22:58:26.897917    9840 start.go:475] detecting cgroup driver to use...
	I1212 22:58:26.898173    9840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:58:26.941663    9840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 22:58:26.971956    9840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 22:58:26.988404    9840 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 22:58:27.005357    9840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 22:58:27.034612    9840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 22:58:27.063524    9840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 22:58:27.095905    9840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 22:58:27.126992    9840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:58:27.156294    9840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 22:58:27.189505    9840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:58:27.225069    9840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:58:27.257283    9840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:58:27.420429    9840 ssh_runner.go:195] Run: sudo systemctl restart containerd
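
The sed sequence above is the containerd half of cgroup-driver setup: pin the pause image to registry.k8s.io/pause:3.9, force SystemdCgroup = false (i.e. cgroupfs), migrate runtime entries to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, then daemon-reload and restart. The decisive edit expressed directly in Go, as a sketch of the same rewrite applied to a local config.toml:

    package cruntime

    import (
        "os"
        "regexp"
    )

    // useCgroupfs rewrites a containerd config.toml so runc uses cgroupfs,
    // the same effect as the `sed -i ... SystemdCgroup = false` call above.
    func useCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        return os.WriteFile(path, out, 0o644)
    }
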
	I1212 22:58:27.444450    9840 start.go:475] detecting cgroup driver to use...
	I1212 22:58:27.457451    9840 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 22:58:27.497037    9840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:58:27.529337    9840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:58:27.573364    9840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:58:27.605015    9840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 22:58:27.638183    9840 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 22:58:27.688339    9840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 22:58:27.705765    9840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:58:27.747291    9840 ssh_runner.go:195] Run: which cri-dockerd
	I1212 22:58:27.766620    9840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 22:58:27.779357    9840 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 22:58:27.818759    9840 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 22:58:27.984597    9840 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 22:58:28.135058    9840 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 22:58:28.135318    9840 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 22:58:28.188398    9840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
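
For Docker itself the driver choice lands as a small /etc/docker/daemon.json (the log shows only its size, 130 bytes, not its contents) followed by daemon-reload and a service restart. A plausible sketch of writing such a file; the field values here are assumptions inferred from the "cgroupfs" message, not the literal payload minikube ships:

    package cruntime

    import (
        "encoding/json"
        "os"
    )

    // writeDaemonJSON emits a minimal daemon.json selecting the cgroupfs
    // driver, analogous to the "scp memory --> /etc/docker/daemon.json" step.
    func writeDaemonJSON(path string) error {
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "storage-driver": "overlay2",
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(path, data, 0o644)
    }
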
	I1212 22:58:28.363831    9840 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 22:59:29.474073    9840 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1089595s)
	I1212 22:59:29.488207    9840 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 22:59:29.514648    9840 out.go:177] 
	W1212 22:59:29.515556    9840 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 22:57:06 UTC, ends at Tue 2023-12-12 22:59:29 UTC. --
	Dec 12 22:57:57 second-234000 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.423342208Z" level=info msg="Starting up"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.424183411Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.425406014Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=689
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.456284698Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.481399667Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.481490167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483199872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483381472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483666573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483761173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.483862973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484056974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484100674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484220474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484744576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484907876Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.484927376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.485164677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.485274577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.485515878Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.485657278Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498701414Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498806714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498827314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498880314Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498900414Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.498966515Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499002515Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499124015Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499164115Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499180915Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499194815Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499209715Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499231715Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499245415Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499258115Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499278415Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499335116Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499350216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499362216Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499448516Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.499916417Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500022118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500059718Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500082918Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500186818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500275218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500353218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500385819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500416319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500429619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500441819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500454119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500468619Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500542319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500667319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500686119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500704619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500718219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500732019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500744619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500757720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500771820Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500784420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.500795620Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.501117621Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.501337221Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.501386521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 22:57:57 second-234000 dockerd[689]: time="2023-12-12T22:57:57.501406721Z" level=info msg="containerd successfully booted in 0.045960s"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.538065721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.554688067Z" level=info msg="Loading containers: start."
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.754584512Z" level=info msg="Loading containers: done."
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783358190Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783442991Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783491091Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783601891Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783626691Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.783720391Z" level=info msg="Daemon has completed initialization"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.838417840Z" level=info msg="API listen on [::]:2376"
	Dec 12 22:57:57 second-234000 dockerd[683]: time="2023-12-12T22:57:57.838483041Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 22:57:57 second-234000 systemd[1]: Started Docker Application Container Engine.
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.383839140Z" level=info msg="Processing signal 'terminated'"
	Dec 12 22:58:28 second-234000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.385681640Z" level=info msg="Daemon shutdown complete"
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.385753240Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.385821240Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 22:58:28 second-234000 dockerd[683]: time="2023-12-12T22:58:28.385943340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 22:58:29 second-234000 systemd[1]: docker.service: Succeeded.
	Dec 12 22:58:29 second-234000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 22:58:29 second-234000 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 22:58:29 second-234000 dockerd[1020]: time="2023-12-12T22:58:29.460863840Z" level=info msg="Starting up"
	Dec 12 22:59:29 second-234000 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 22:59:29 second-234000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 22:59:29 second-234000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 22:59:29 second-234000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
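
The decisive line in that journal is dockerd[1020] timing out on dialing /run/containerd/containerd.sock: the first dockerd instance shut down cleanly, but on restart its managed containerd never came back within the dial window, so docker.service exits 1 and systemd marks the unit failed. A minimal probe for that condition (socket path taken from the log; the helper itself is illustrative):

    package diagnose

    import (
        "net"
        "time"
    )

    // containerdUp reports whether anything is accepting connections on the
    // containerd socket dockerd failed to dial in the journal above.
    func containerdUp() bool {
        conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }
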
	W1212 22:59:29.515556    9840 out.go:239] * 
	W1212 22:59:29.517614    9840 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 22:59:29.518235    9840 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 22:53:58 UTC, ends at Tue 2023-12-12 23:01:03 UTC. --
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.156775636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.156856136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.230938742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.231242141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.231331941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.231379141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:56:05 first-983800 cri-dockerd[1222]: time="2023-12-12T22:56:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/456cb5d8be2fcdc2ab4fc17fadb652b88fb418dab5553441517244fdf5b183ab/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.576428700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.576906499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.577021899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 22:56:05 first-983800 dockerd[1337]: time="2023-12-12T22:56:05.577203799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:56:05 first-983800 cri-dockerd[1222]: time="2023-12-12T22:56:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/52d040ecf07c0226f05d7a10780636650d255a24149cba731bdc913aad8b541d/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 22:56:06 first-983800 dockerd[1337]: time="2023-12-12T22:56:06.181826740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 22:56:06 first-983800 dockerd[1337]: time="2023-12-12T22:56:06.181916140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:56:06 first-983800 dockerd[1337]: time="2023-12-12T22:56:06.181998940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 22:56:06 first-983800 dockerd[1337]: time="2023-12-12T22:56:06.182018040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:56:12 first-983800 cri-dockerd[1222]: time="2023-12-12T22:56:12Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 12 22:56:35 first-983800 dockerd[1331]: time="2023-12-12T22:56:35.753521884Z" level=info msg="ignoring event" container=f946162dc1aa5925c2ccd2b8f9c8afb49b38516d2cb689441e3e1b1e629f0523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 12 22:56:35 first-983800 dockerd[1337]: time="2023-12-12T22:56:35.755722284Z" level=info msg="shim disconnected" id=f946162dc1aa5925c2ccd2b8f9c8afb49b38516d2cb689441e3e1b1e629f0523 namespace=moby
	Dec 12 22:56:35 first-983800 dockerd[1337]: time="2023-12-12T22:56:35.756021584Z" level=warning msg="cleaning up after shim disconnected" id=f946162dc1aa5925c2ccd2b8f9c8afb49b38516d2cb689441e3e1b1e629f0523 namespace=moby
	Dec 12 22:56:35 first-983800 dockerd[1337]: time="2023-12-12T22:56:35.756459483Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 12 22:56:36 first-983800 dockerd[1337]: time="2023-12-12T22:56:36.875210187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 22:56:36 first-983800 dockerd[1337]: time="2023-12-12T22:56:36.875376487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 22:56:36 first-983800 dockerd[1337]: time="2023-12-12T22:56:36.875711087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 22:56:36 first-983800 dockerd[1337]: time="2023-12-12T22:56:36.875741187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b73dae6d07dfe       6e38f40d628db       4 minutes ago       Running             storage-provisioner       1                   456cb5d8be2fc       storage-provisioner
	dc35c02c03160       ead0a4a53df89       4 minutes ago       Running             coredns                   0                   52d040ecf07c0       coredns-5dd5756b68-lwhfh
	f946162dc1aa5       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   456cb5d8be2fc       storage-provisioner
	af6471959f140       83f6cc407eed8       4 minutes ago       Running             kube-proxy                0                   f7a3b725f55e8       kube-proxy-zh6sv
	16abb7ad74410       e3db313c6dbc0       5 minutes ago       Running             kube-scheduler            0                   f4895e2151347       kube-scheduler-first-983800
	12ddc1feaf0f6       73deb9a3f7025       5 minutes ago       Running             etcd                      0                   4ac07219d9c4b       etcd-first-983800
	d3a5e0d6313bf       d058aa5ab969c       5 minutes ago       Running             kube-controller-manager   0                   827d635b1af89       kube-controller-manager-first-983800
	c2fe330d724bf       7fe0e6f37db33       5 minutes ago       Running             kube-apiserver            0                   70827cd9fde5f       kube-apiserver-first-983800
	
	* 
	* ==> coredns [dc35c02c0316] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44521 - 28495 "HINFO IN 5817551940622434455.5686291993422746486. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062217825s
	
	* 
	* ==> describe nodes <==
	* Name:               first-983800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=first-983800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=first-983800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_55_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:55:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  first-983800
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:01:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:56:12 +0000   Tue, 12 Dec 2023 22:55:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:56:12 +0000   Tue, 12 Dec 2023 22:55:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:56:12 +0000   Tue, 12 Dec 2023 22:55:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:56:12 +0000   Tue, 12 Dec 2023 22:55:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.58.217
	  Hostname:    first-983800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	System Info:
	  Machine ID:                 7053e397e520481d8421b85eebd15d35
	  System UUID:                3b541769-a4cf-bc4f-ad53-c69f71fa4430
	  Boot ID:                    5dc21469-22a7-4bdb-a59c-1efb550c7336
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-lwhfh                100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     4m59s
	  kube-system                 etcd-first-983800                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m11s
	  kube-system                 kube-apiserver-first-983800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-first-983800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-zh6sv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-first-983800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m57s                  kube-proxy       
	  Normal  Starting                 5m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node first-983800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node first-983800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node first-983800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m11s                  kubelet          Node first-983800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s                  kubelet          Node first-983800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s                  kubelet          Node first-983800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m6s                   kubelet          Node first-983800 status is now: NodeReady
	  Normal  RegisteredNode           5m                     node-controller  Node first-983800 event: Registered Node first-983800 in Controller
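
The percentages in the two resource tables above are integer shares of the node's allocatable capacity (2 CPUs = 2000m, 5925712Ki memory). A quick check of the allocated-resources summary row, as a sketch:

    package main

    import "fmt"

    // Reproduces the allocated-resources percentages shown above: 750m CPU
    // against 2 cores and 170Mi memory against 5925712Ki allocatable.
    func main() {
        fmt.Printf("cpu: %d%%\n", 750*100/2000)            // 37
        fmt.Printf("memory: %d%%\n", 170*1024*100/5925712) // 2
    }
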
	
	* 
	* ==> dmesg <==
	* [  +0.015172] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.953490] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.395091] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.117347] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec12 22:54] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +42.453546] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.139167] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[Dec12 22:55] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.580876] systemd-fstab-generator[985]: Ignoring "noauto" for root device
	[  +0.170459] systemd-fstab-generator[996]: Ignoring "noauto" for root device
	[  +0.186686] systemd-fstab-generator[1009]: Ignoring "noauto" for root device
	[  +1.326964] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.324555] systemd-fstab-generator[1167]: Ignoring "noauto" for root device
	[  +0.165525] systemd-fstab-generator[1178]: Ignoring "noauto" for root device
	[  +0.162986] systemd-fstab-generator[1189]: Ignoring "noauto" for root device
	[  +0.153925] systemd-fstab-generator[1200]: Ignoring "noauto" for root device
	[  +0.204576] systemd-fstab-generator[1214]: Ignoring "noauto" for root device
	[  +7.056784] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +8.668837] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.511518] systemd-fstab-generator[1707]: Ignoring "noauto" for root device
	[  +0.536066] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.303120] systemd-fstab-generator[2640]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [12ddc1feaf0f] <==
	* {"level":"info","ts":"2023-12-12T22:55:46.030464Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T22:55:46.031182Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T22:55:46.031368Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T22:55:46.029963Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"5e1db4017b9bd145","initial-advertise-peer-urls":["https://172.30.58.217:2380"],"listen-peer-urls":["https://172.30.58.217:2380"],"advertise-client-urls":["https://172.30.58.217:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.58.217:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T22:55:46.029994Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T22:55:46.033962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e1db4017b9bd145 switched to configuration voters=(6781774532351611205)"}
	{"level":"info","ts":"2023-12-12T22:55:46.035937Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"849439611254c330","local-member-id":"5e1db4017b9bd145","added-peer-id":"5e1db4017b9bd145","added-peer-peer-urls":["https://172.30.58.217:2380"]}
	{"level":"info","ts":"2023-12-12T22:55:46.386836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e1db4017b9bd145 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T22:55:46.387156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e1db4017b9bd145 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T22:55:46.387453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e1db4017b9bd145 received MsgPreVoteResp from 5e1db4017b9bd145 at term 1"}
	{"level":"info","ts":"2023-12-12T22:55:46.387765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e1db4017b9bd145 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T22:55:46.387907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e1db4017b9bd145 received MsgVoteResp from 5e1db4017b9bd145 at term 2"}
	{"level":"info","ts":"2023-12-12T22:55:46.388146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e1db4017b9bd145 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T22:55:46.388345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5e1db4017b9bd145 elected leader 5e1db4017b9bd145 at term 2"}
	{"level":"info","ts":"2023-12-12T22:55:46.392463Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"5e1db4017b9bd145","local-member-attributes":"{Name:first-983800 ClientURLs:[https://172.30.58.217:2379]}","request-path":"/0/members/5e1db4017b9bd145/attributes","cluster-id":"849439611254c330","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T22:55:46.393015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T22:55:46.396423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T22:55:46.396821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T22:55:46.398199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.58.217:2379"}
	{"level":"info","ts":"2023-12-12T22:55:46.401134Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:55:46.401248Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T22:55:46.430784Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T22:55:46.47515Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"849439611254c330","local-member-id":"5e1db4017b9bd145","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:55:46.475506Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:55:46.475833Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  23:01:03 up 7 min,  0 users,  load average: 0.19, 0.42, 0.25
	Linux first-983800 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c2fe330d724b] <==
	* I1212 22:55:48.708322       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 22:55:48.709161       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 22:55:48.710917       1 aggregator.go:166] initial CRD sync complete...
	I1212 22:55:48.711151       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 22:55:48.711476       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 22:55:48.713421       1 cache.go:39] Caches are synced for autoregister controller
	I1212 22:55:48.713875       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 22:55:48.719662       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 22:55:48.751870       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 22:55:48.787048       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 22:55:49.621235       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 22:55:49.632277       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 22:55:49.632519       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 22:55:50.443238       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 22:55:50.500523       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 22:55:50.662028       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 22:55:50.671750       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.58.217]
	I1212 22:55:50.673277       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 22:55:50.678828       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 22:55:50.718186       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 22:55:52.222100       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 22:55:52.236368       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 22:55:52.247704       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 22:56:04.274828       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 22:56:04.429334       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [d3a5e0d6313b] <==
	* I1212 22:56:03.587767       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1212 22:56:03.589192       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1212 22:56:03.589214       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1212 22:56:03.590028       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1212 22:56:03.592688       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1212 22:56:03.611174       1 shared_informer.go:318] Caches are synced for HPA
	I1212 22:56:03.615550       1 shared_informer.go:318] Caches are synced for cronjob
	I1212 22:56:03.622648       1 shared_informer.go:318] Caches are synced for job
	I1212 22:56:03.686515       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 22:56:03.689412       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 22:56:03.723185       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1212 22:56:03.733134       1 shared_informer.go:318] Caches are synced for endpoint
	I1212 22:56:04.144787       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 22:56:04.175240       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 22:56:04.175711       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 22:56:04.281052       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
	I1212 22:56:04.447630       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zh6sv"
	I1212 22:56:04.579659       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lwhfh"
	I1212 22:56:04.603983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="323.742558ms"
	I1212 22:56:04.634043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="30.020159ms"
	I1212 22:56:04.635277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.152199ms"
	I1212 22:56:04.635349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.2µs"
	I1212 22:56:06.494671       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.9µs"
	I1212 22:56:06.546033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.388264ms"
	I1212 22:56:06.550126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.004896ms"
	
	* 
	* ==> kube-proxy [af6471959f14] <==
	* I1212 22:56:05.444900       1 server_others.go:69] "Using iptables proxy"
	I1212 22:56:05.468854       1 node.go:141] Successfully retrieved node IP: 172.30.58.217
	I1212 22:56:05.764134       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 22:56:05.764163       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 22:56:05.771570       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:56:05.771828       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:56:05.772501       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:56:05.772917       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:56:05.774577       1 config.go:188] "Starting service config controller"
	I1212 22:56:05.774769       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:56:05.775027       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:56:05.775374       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:56:05.778846       1 config.go:315] "Starting node config controller"
	I1212 22:56:05.779119       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:56:05.876017       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 22:56:05.876482       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:56:05.879741       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [16abb7ad7441] <==
	* W1212 22:55:49.761998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 22:55:49.762044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 22:55:49.763169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 22:55:49.763207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 22:55:49.909288       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:55:49.909351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:55:49.934581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 22:55:49.934656       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 22:55:49.979178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:55:49.979218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 22:55:50.014223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 22:55:50.014431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 22:55:50.017583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 22:55:50.017776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 22:55:50.103405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:55:50.103456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 22:55:50.136996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:55:50.137028       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 22:55:50.185146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:55:50.185357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 22:55:50.185633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 22:55:50.185704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:55:50.359245       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:55:50.359293       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 22:55:52.943434       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
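	These "forbidden" list/watch failures are ordinary kube-scheduler startup noise: its informers come up before the bootstrap RBAC bindings for system:kube-scheduler have propagated, and the final "Caches are synced" line shows they resolved on their own. If they persisted, the effective permissions could be checked against this profile's kubeconfig (a diagnostic sketch, not part of the test run; the --as impersonation relies on the cluster-admin rights minikube's default context carries):
	
		kubectl --context first-983800 auth can-i list pods --as=system:kube-scheduler
		kubectl --context first-983800 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler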
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 22:53:58 UTC, ends at Tue 2023-12-12 23:01:03 UTC. --
	Dec 12 22:56:06 first-983800 kubelet[2661]: I1212 22:56:06.580295    2661 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zh6sv" podStartSLOduration=2.580253962 podCreationTimestamp="2023-12-12 22:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:56:06.558186688 +0000 UTC m=+14.372721707" watchObservedRunningTime="2023-12-12 22:56:06.580253962 +0000 UTC m=+14.394788881"
	Dec 12 22:56:12 first-983800 kubelet[2661]: I1212 22:56:12.974163    2661 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 22:56:12 first-983800 kubelet[2661]: I1212 22:56:12.975551    2661 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 22:56:36 first-983800 kubelet[2661]: I1212 22:56:36.764712    2661 scope.go:117] "RemoveContainer" containerID="f946162dc1aa5925c2ccd2b8f9c8afb49b38516d2cb689441e3e1b1e629f0523"
	Dec 12 22:56:37 first-983800 kubelet[2661]: I1212 22:56:37.799194    2661 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=37.799023236 podCreationTimestamp="2023-12-12 22:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:56:06.58188236 +0000 UTC m=+14.396417379" watchObservedRunningTime="2023-12-12 22:56:37.799023236 +0000 UTC m=+45.613558255"
	Dec 12 22:56:52 first-983800 kubelet[2661]: E1212 22:56:52.590373    2661 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 22:56:52 first-983800 kubelet[2661]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 22:56:52 first-983800 kubelet[2661]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 22:56:52 first-983800 kubelet[2661]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 22:57:52 first-983800 kubelet[2661]: E1212 22:57:52.589930    2661 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 22:57:52 first-983800 kubelet[2661]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 22:57:52 first-983800 kubelet[2661]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 22:57:52 first-983800 kubelet[2661]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 22:58:52 first-983800 kubelet[2661]: E1212 22:58:52.586223    2661 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 22:58:52 first-983800 kubelet[2661]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 22:58:52 first-983800 kubelet[2661]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 22:58:52 first-983800 kubelet[2661]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 22:59:52 first-983800 kubelet[2661]: E1212 22:59:52.588889    2661 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 22:59:52 first-983800 kubelet[2661]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 22:59:52 first-983800 kubelet[2661]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 22:59:52 first-983800 kubelet[2661]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:00:52 first-983800 kubelet[2661]: E1212 23:00:52.587227    2661 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:00:52 first-983800 kubelet[2661]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:00:52 first-983800 kubelet[2661]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:00:52 first-983800 kubelet[2661]:  > table="nat" chain="KUBE-KUBELET-CANARY"
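	The repeating canary failure above indicates the guest kernel ships without the ip6table_nat module, so kubelet cannot create its IPv6 canary chain; the check reruns once a minute (hence the identical entries), and the error is harmless for an IPv4-only cluster like this one. It could be confirmed from the host while the profile is still running (a diagnostic sketch, not part of the test run):
	
		minikube ssh -p first-983800 "lsmod | grep ip6table_nat; sudo ip6tables -t nat -L -n"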
	
	* 
	* ==> storage-provisioner [b73dae6d07df] <==
	* I1212 22:56:36.975951       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 22:56:36.996194       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 22:56:36.996320       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 22:56:37.009209       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 22:56:37.009683       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_first-983800_3645ea3e-228e-433b-a92e-4d5bc86e3743!
	I1212 22:56:37.009500       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a94d5552-7a57-4792-a970-6db8f8708d85", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' first-983800_3645ea3e-228e-433b-a92e-4d5bc86e3743 became leader
	I1212 22:56:37.110636       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_first-983800_3645ea3e-228e-433b-a92e-4d5bc86e3743!
	
	* 
	* ==> storage-provisioner [f946162dc1aa] <==
	* I1212 22:56:05.720454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 22:56:35.725562       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
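	The first storage-provisioner container (f946162dc1aa) died because the in-cluster apiserver VIP was unreachable within the client's 32s timeout; the kubelet "RemoveContainer" entry earlier and the second, successful instance (b73dae6d07df) show the pod simply restarted once the service network came up. After the fact, the crashed container's output can still be pulled with standard kubectl flags (a sketch; the second command lists the 10.96.0.1 ClusterIP the provisioner was dialing):
	
		kubectl --context first-983800 -n kube-system logs storage-provisioner --previous
		kubectl --context first-983800 get svc kubernetes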
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:00:55.868219    4732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p first-983800 -n first-983800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p first-983800 -n first-983800: (11.9980558s)
helpers_test.go:261: (dbg) Run:  kubectl --context first-983800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMinikubeProfile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "first-983800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-983800
E1212 23:01:22.630609   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:01:25.435696   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-983800: (41.5988169s)
--- FAIL: TestMinikubeProfile (541.90s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (445.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-392000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E1212 23:13:56.378519   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 23:15:53.172828   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 23:16:22.639889   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:16:25.440154   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-392000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: exit status 90 (6m50.4039677s)

                                                
                                                
-- stdout --
	* [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node multinode-392000 in cluster multinode-392000
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting worker node multinode-392000-m02 in cluster multinode-392000
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.30.51.245
	  - NO_PROXY=172.30.51.245
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:11:29.992758    8472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.071716    8472 out.go:309] Setting ErrFile to fd 756...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.094706    8472 out.go:303] Setting JSON to false
	I1212 23:11:30.097728    8472 start.go:128] hostinfo: {"hostname":"minikube7","uptime":76287,"bootTime":1702346402,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:11:30.097728    8472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:11:30.099331    8472 out.go:177] * [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:11:30.099722    8472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:11:30.099722    8472 notify.go:220] Checking for updates...
	I1212 23:11:30.100958    8472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:11:30.101483    8472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:11:30.102516    8472 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:11:30.103354    8472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:11:30.104853    8472 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:11:35.379035    8472 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:11:35.380001    8472 start.go:298] selected driver: hyperv
	I1212 23:11:35.380001    8472 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:11:35.380001    8472 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:11:35.430879    8472 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:11:35.431976    8472 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:11:35.432174    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:11:35.432174    8472 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:11:35.432174    8472 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:11:35.432174    8472 start_flags.go:323] config:
	{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:35.432785    8472 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:11:35.434592    8472 out.go:177] * Starting control plane node multinode-392000 in cluster multinode-392000
	I1212 23:11:35.434882    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:11:35.435410    8472 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:11:35.435444    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:11:35.435894    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:11:35.435894    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:11:35.436458    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:11:35.436458    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json: {Name:mk07adc881ba1a1ec87edb34c2760e84e9f12eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:35.438010    8472 start.go:365] acquiring machines lock for multinode-392000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:11:35.438172    8472 start.go:369] acquired machines lock for "multinode-392000" in 43.3µs
	I1212 23:11:35.438240    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:11:35.438240    8472 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 23:11:35.439294    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:11:35.439734    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:11:35.439996    8472 client.go:168] LocalClient.Create starting
	I1212 23:11:35.440162    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441050    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441543    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:11:37.487993    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:11:39.204044    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:11:39.204143    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:39.204222    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:40.663233    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:44.190819    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:44.191081    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:44.194062    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:11:44.711737    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: Creating VM...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:47.732456    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:47.732576    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:47.732727    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:11:47.732880    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:49.467956    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:49.468070    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:49.468070    8472 main.go:141] libmachine: Creating VHD
	I1212 23:11:49.468208    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F469FE2D-E21B-45E1-BE12-1FCB18DB12B2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:11:53.108721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:56.276637    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -SizeBytes 20000MB
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stderr =====>] : 
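	The New-VHD/Convert-VHD/Resize-VHD sequence above is the docker-machine disk trick: a tiny 10 MB fixed VHD is created so its payload sits at a known offset, a tar stream holding the SSH key is raw-written into it (the "Writing magic tar header" / "Writing SSH key tar header" lines), and the disk is then converted to a dynamic VHD and grown to the requested 20000 MB; by the usual boot2docker convention the guest detects the tar signature on first boot and formats the disk around it. Reproduced standalone it would look roughly like this (a sketch with a hypothetical C:\tmp path; the cmdlets and parameters appear verbatim in the trace above):
	
		Hyper-V\New-VHD -Path 'C:\tmp\fixed.vhd' -SizeBytes 10MB -Fixed
		# ... raw-write a tar header plus the id_rsa key into fixed.vhd here ...
		Hyper-V\Convert-VHD -Path 'C:\tmp\fixed.vhd' -DestinationPath 'C:\tmp\disk.vhd' -VHDType Dynamic -DeleteSource
		Hyper-V\Resize-VHD -Path 'C:\tmp\disk.vhd' -SizeBytes 20000MB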
	I1212 23:11:58.764692    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-392000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000 -DynamicMemoryEnabled $false
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:04.436332    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000 -Count 2
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\boot2docker.iso'
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd'
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:11.817904    8472 main.go:141] libmachine: Starting VM...
	I1212 23:12:11.817904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:12:14.636759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:16.857062    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:16.857260    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:16.857330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:20.386945    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:22.605951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:26.191747    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:28.348821    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:30.824944    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:30.825184    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:31.825449    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:36.445712    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:36.445785    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:37.459217    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:42.223526    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:44.305043    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:44.305406    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:44.305406    8472 machine.go:88] provisioning docker machine ...
	I1212 23:12:44.305506    8472 buildroot.go:166] provisioning hostname "multinode-392000"
	I1212 23:12:44.305650    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:46.463699    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:48.946017    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:48.946116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:48.952068    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:48.964084    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:48.964084    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname
	I1212 23:12:49.130659    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000
	
	I1212 23:12:49.130793    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:51.216440    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:53.725386    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:53.726016    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:53.726016    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:12:53.876910    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:12:53.876910    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:12:53.877039    8472 buildroot.go:174] setting up certificates
	I1212 23:12:53.877109    8472 provision.go:83] configureAuth start
	I1212 23:12:53.877163    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:55.991772    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:58.499603    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:00.594939    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:03.100178    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:03.100273    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:03.100273    8472 provision.go:138] copyHostCerts
	I1212 23:13:03.100538    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:13:03.100666    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:13:03.100666    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:13:03.101260    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:13:03.102786    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:13:03.103156    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:13:03.103156    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:13:03.103581    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:13:03.104593    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:13:03.105032    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:13:03.105032    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:13:03.105182    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:13:03.106302    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000 san=[172.30.51.245 172.30.51.245 localhost 127.0.0.1 minikube multinode-392000]
	I1212 23:13:03.360027    8472 provision.go:172] copyRemoteCerts
	I1212 23:13:03.374057    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:13:03.374057    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:08.008195    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:08.116237    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7420653s)
	I1212 23:13:08.116237    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:13:08.116427    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:13:08.152557    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:13:08.153040    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:13:08.195988    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:13:08.196559    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:13:08.232338    8472 provision.go:86] duration metric: configureAuth took 14.3551646s
	I1212 23:13:08.232338    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:13:08.233351    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:13:08.233351    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:10.326980    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:12.830327    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:12.831103    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:12.831103    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:13:12.971332    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:13:12.971397    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:13:12.971686    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:13:12.971759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:17.524781    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:17.524929    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:17.532264    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:17.532875    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:17.533036    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:13:17.693682    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:13:17.693682    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:19.797719    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:22.305428    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:22.305611    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:22.311364    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:22.312148    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:22.312148    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:13:23.268460    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:13:23.268460    8472 machine.go:91] provisioned docker machine in 38.9628792s
	I1212 23:13:23.268460    8472 client.go:171] LocalClient.Create took 1m47.8279792s
	I1212 23:13:23.268460    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m47.8282413s
	I1212 23:13:23.268460    8472 start.go:300] post-start starting for "multinode-392000" (driver="hyperv")
	I1212 23:13:23.268460    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:13:23.283134    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:13:23.283134    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:25.344143    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:25.344398    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:25.344531    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:27.853202    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:27.960465    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6773102s)
	I1212 23:13:27.975019    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:13:27.981168    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:13:27.981317    8472 command_runner.go:130] > ID=buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:13:27.981317    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:13:27.981408    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:13:27.981509    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:13:27.981573    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:13:27.982899    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:13:27.982899    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:13:27.996731    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:13:28.011281    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:13:28.049499    8472 start.go:303] post-start completed in 4.7810169s
	I1212 23:13:28.051903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:30.124520    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:32.635986    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:32.636168    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:32.636335    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:13:32.639612    8472 start.go:128] duration metric: createHost completed in 1m57.2008454s
	I1212 23:13:32.639734    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:37.252006    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:37.252675    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:37.252675    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:13:37.394466    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422817.389981544
	
	I1212 23:13:37.394466    8472 fix.go:206] guest clock: 1702422817.389981544
	I1212 23:13:37.394466    8472 fix.go:219] Guest: 2023-12-12 23:13:37.389981544 +0000 UTC Remote: 2023-12-12 23:13:32.6396781 +0000 UTC m=+122.746612401 (delta=4.750303444s)
	I1212 23:13:37.394466    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:39.525951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:42.048856    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:42.049171    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:42.054999    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:42.057020    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:42.057020    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702422817
	I1212 23:13:42.207558    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:13:37 UTC 2023
	
	I1212 23:13:42.207558    8472 fix.go:226] clock set: Tue Dec 12 23:13:37 UTC 2023
	 (err=<nil>)
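
The exchange above is minikube's guest-clock fix: it samples the guest with "date +%s.%N", compares it to the host clock (a 4.75 s drift here, accumulated while the VM was provisioned), and rewrites the guest clock with "date -s @<epoch>". A standalone sketch of the same check; the IP and user come from this run, the 2 s threshold is illustrative:

	guest_ip=172.30.51.245
	host_now=$(date +%s)
	guest_now=$(ssh docker@"$guest_ip" 'date +%s')
	drift=$((host_now - guest_now))
	# ${drift#-} strips the sign, giving the absolute drift in seconds
	if [ "${drift#-}" -gt 2 ]; then
	    ssh docker@"$guest_ip" "sudo date -s @$host_now"
	fi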
	I1212 23:13:42.207558    8472 start.go:83] releasing machines lock for "multinode-392000", held for 2m6.7687735s
	I1212 23:13:42.208388    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:46.748039    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:46.748116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:46.752230    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:13:46.752339    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:46.765270    8472 ssh_runner.go:195] Run: cat /version.json
	I1212 23:13:46.765814    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:51.518393    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.518589    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.519047    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.538571    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.618146    8472 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 23:13:51.618146    8472 ssh_runner.go:235] Completed: cat /version.json: (4.8528548s)
	I1212 23:13:51.632470    8472 ssh_runner.go:195] Run: systemctl --version
	I1212 23:13:51.705182    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:13:51.705326    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9530322s)
	I1212 23:13:51.705474    8472 command_runner.go:130] > systemd 247 (247)
	I1212 23:13:51.705474    8472 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:13:51.717133    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:13:51.725591    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:13:51.726008    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:13:51.738060    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:13:51.760525    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:13:51.761431    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
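
The find/mv pair above sidelines every bridge and podman CNI profile by renaming it with a .mk_disabled suffix, so the runtime's scan of /etc/cni/net.d no longer matches it; kindnet is installed later instead. Undoing it is the reverse rename, sketched here:

	# Restore any CNI configs minikube disabled
	for f in /etc/cni/net.d/*.mk_disabled; do
	    [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
	done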
	I1212 23:13:51.761431    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:51.761737    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:51.787290    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:13:51.802604    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:13:51.833298    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:13:51.849124    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:13:51.865424    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:13:51.896430    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.925062    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:13:51.954292    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.986199    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:13:52.018341    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
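
Taken together, the sed edits above rewrite /etc/containerd/config.toml so that the pause image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj is off, SystemdCgroup is false (the cgroupfs driver this cluster uses), the legacy v1 runtimes are aliased to io.containerd.runc.v2, and conf_dir points at /etc/cni/net.d. A quick spot-check of the result, as a sketch:

	# Expect SystemdCgroup = false and the pause:3.9 sandbox image
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml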
	I1212 23:13:52.051014    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:13:52.066722    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:13:52.079021    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:13:52.108672    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:52.285653    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 23:13:52.311279    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:52.326723    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Unit]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:13:52.345659    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:13:52.345659    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:13:52.345659    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Service]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Type=notify
	I1212 23:13:52.345659    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:13:52.345659    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:13:52.346602    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:13:52.346602    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:13:52.346602    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:13:52.346602    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:13:52.346602    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:13:52.346602    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:13:52.346602    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Only systemd 226 and above support this option.
	I1212 23:13:52.346602    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:13:52.346602    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:13:52.346602    8472 command_runner.go:130] > Delegate=yes
	I1212 23:13:52.346602    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:13:52.346602    8472 command_runner.go:130] > KillMode=process
	I1212 23:13:52.346602    8472 command_runner.go:130] > [Install]
	I1212 23:13:52.346602    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:13:52.361605    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.398612    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:13:52.438497    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.478249    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.515469    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:13:52.572526    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.596922    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:52.625715    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:13:52.640295    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:13:52.648317    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:13:52.660918    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:13:52.675527    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:13:52.716542    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:13:52.882321    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:13:53.028395    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:13:53.028810    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:13:53.070347    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:53.231794    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:13:54.707655    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4758548s)
	I1212 23:13:54.722714    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:54.886957    8472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 23:13:55.059072    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:55.219495    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.397909    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 23:13:55.436243    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.597738    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 23:13:55.697504    8472 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 23:13:55.711625    8472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:13:55.718995    8472 command_runner.go:130] > Device: 16h/22d	Inode: 928         Links: 1
	I1212 23:13:55.718995    8472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 23:13:55.719086    8472 command_runner.go:130] > Access: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Modify: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Change: 2023-12-12 23:13:55.617702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] >  Birth: -
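
The stat call succeeding is what satisfies the "wait 60s for socket path" step: the socket node appearing under /var/run means cri-dockerd is up and accepting CRI connections. The wait itself amounts to a bounded poll, sketched here with the same 60 s deadline:

	# Poll for the CRI socket; -S tests for a socket file
	for _ in $(seq 1 60); do
	    [ -S /var/run/cri-dockerd.sock ] && break
	    sleep 1
	done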
	I1212 23:13:55.719245    8472 start.go:543] Will wait 60s for crictl version
	I1212 23:13:55.732224    8472 ssh_runner.go:195] Run: which crictl
	I1212 23:13:55.737239    8472 command_runner.go:130] > /usr/bin/crictl
	I1212 23:13:55.751402    8472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:13:55.821560    8472 command_runner.go:130] > Version:  0.1.0
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeName:  docker
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:13:55.821684    8472 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 23:13:55.831458    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.865302    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.877867    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.906635    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.909704    8472 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 23:13:55.909704    8472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: 172.30.48.1/20
	I1212 23:13:55.931095    8472 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 23:13:55.936984    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
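
The bash pipeline above is an idempotent /etc/hosts upsert: grep -v drops any stale host.minikube.internal line, the fresh mapping is appended, and the result lands in a temp file that is copied over /etc/hosts in one step, so the file is never left half-written. Standalone, with the gateway IP from this run:

	# Rewrite /etc/hosts atomically with the current host.minikube.internal entry
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '172.30.48.1\thost.minikube.internal\n'
	} > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts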
	I1212 23:13:55.954782    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:13:55.966850    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:13:55.989987    8472 docker.go:671] Got preloaded images: 
	I1212 23:13:55.989987    8472 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 23:13:56.002978    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:13:56.016572    8472 command_runner.go:139] > {"Repositories":{}}
	I1212 23:13:56.029505    8472 ssh_runner.go:195] Run: which lz4
	I1212 23:13:56.035359    8472 command_runner.go:130] > /usr/bin/lz4
	I1212 23:13:56.035359    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:13:56.046382    8472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:13:56.052856    8472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 23:13:58.736125    8472 docker.go:635] Took 2.700536 seconds to copy over tarball
	I1212 23:13:58.753146    8472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:14:08.022919    8472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2697318s)
	I1212 23:14:08.022919    8472 ssh_runner.go:146] rm: /preloaded.tar.lz4
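
Because kube-apiserver:v1.28.4 was not among the images in the fresh Docker store, minikube fell back to shipping the preload tarball: scp it into the guest, unpack it over /var (which holds /var/lib/docker) with lz4-aware tar, then delete it. A by-hand replay of those steps, as a sketch; the key path, IP, and tarball name are from this run, and /tmp stands in for the / target the log uses so the ssh user can write it:

	key=~/.minikube/machines/multinode-392000/id_rsa
	scp -i "$key" preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 \
	    docker@172.30.51.245:/tmp/preloaded.tar.lz4
	# -I lz4 tells tar to filter the archive through lz4 when extracting
	ssh -i "$key" docker@172.30.51.245 \
	    'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'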
	I1212 23:14:08.095190    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:14:08.111721    8472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 23:14:08.111721    8472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 23:14:08.157625    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:14:08.340167    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:14:10.676687    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3364436s)
	I1212 23:14:10.688217    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:14:10.713622    8472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 23:14:10.713688    8472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:10.713884    8472 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 23:14:10.713884    8472 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:14:10.725093    8472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 23:14:10.761269    8472 command_runner.go:130] > cgroupfs
	I1212 23:14:10.761441    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:10.761635    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:10.761699    8472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:14:10.761699    8472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.51.245 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-392000 NodeName:multinode-392000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.51.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.51.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:14:10.761920    8472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.51.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-392000"
	  kubeletExtraArgs:
	    node-ip: 172.30.51.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.51.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
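
The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init. It can be validated without touching node state via kubeadm's dry-run mode; a sketch using the binaries path from this run:

	# Parse and exercise the config without applying any changes
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run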
	
	I1212 23:14:10.762050    8472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:14:10.779262    8472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:14:10.794245    8472 command_runner.go:130] > kubeadm
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubectl
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubelet
	I1212 23:14:10.794911    8472 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:14:10.809051    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:14:10.823032    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:14:10.848411    8472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:14:10.870951    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 23:14:10.911088    8472 ssh_runner.go:195] Run: grep 172.30.51.245	control-plane.minikube.internal$ /etc/hosts
	I1212 23:14:10.917196    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.51.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:14:10.933858    8472 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000 for IP: 172.30.51.245
	I1212 23:14:10.933934    8472 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:10.934858    8472 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 23:14:10.935530    8472 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 23:14:10.936524    8472 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key
	I1212 23:14:10.936810    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt with IP's: []
	I1212 23:14:11.093297    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt ...
	I1212 23:14:11.093297    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt: {Name:mk11a4d3835ab9ea840eb8ac6add84affb6c8dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.094980    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key ...
	I1212 23:14:11.094980    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key: {Name:mk06fddcf6422638da0b31b4d428923c70703238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.095936    8472 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa
	I1212 23:14:11.096955    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa with IP's: [172.30.51.245 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:14:11.196952    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa ...
	I1212 23:14:11.197202    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa: {Name:mkdf435dcc8983bec1e572c7a448162db34b2756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.198846    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa ...
	I1212 23:14:11.198846    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa: {Name:mk41672c6a02cbb3382bef7d288d52f8f77ae5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.199921    8472 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt
	I1212 23:14:11.213239    8472 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key
	I1212 23:14:11.214508    8472 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key
	I1212 23:14:11.214661    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt with IP's: []
	I1212 23:14:11.328325    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt ...
	I1212 23:14:11.328325    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt: {Name:mk6e1ad80e6dad066789266c677d39834bd11583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.330616    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key ...
	I1212 23:14:11.330616    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key: {Name:mk3959079764fecf7ecbee13715f18146dcf3506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
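
All of these certificates are generated in-process by minikube with Go's crypto library; no openssl is involved. For orientation only, a rough openssl equivalent of the apiserver leaf cert, with the SAN list from this run (the CN and validity period are illustrative):

	printf 'subjectAltName=IP:172.30.51.245,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1\n' > san.ext
	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	    -out apiserver.csr -subj '/CN=minikube'
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	    -days 365 -extfile san.ext -out apiserver.crt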
	I1212 23:14:11.332006    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:14:11.332144    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:14:11.332442    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:14:11.342046    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:14:11.342358    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:14:11.342600    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:14:11.342813    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:14:11.343009    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:14:11.343165    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem (1338 bytes)
	W1212 23:14:11.343825    8472 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816_empty.pem, impossibly tiny 0 bytes
	I1212 23:14:11.343825    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 23:14:11.344117    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 23:14:11.344381    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 23:14:11.344630    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 23:14:11.344862    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem (1708 bytes)
	I1212 23:14:11.344862    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem -> /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.345574    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.345718    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:11.345852    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:14:11.386214    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:14:11.425674    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:14:11.464191    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:14:11.502474    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:14:11.538128    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:14:11.575129    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:14:11.613906    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:14:11.650659    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem --> /usr/share/ca-certificates/13816.pem (1338 bytes)
	I1212 23:14:11.686706    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /usr/share/ca-certificates/138162.pem (1708 bytes)
	I1212 23:14:11.726349    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:14:11.762200    8472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:14:11.800421    8472 ssh_runner.go:195] Run: openssl version
	I1212 23:14:11.809841    8472 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:14:11.823469    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13816.pem && ln -fs /usr/share/ca-certificates/13816.pem /etc/ssl/certs/13816.pem"
	I1212 23:14:11.861330    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.882273    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.889871    8472 command_runner.go:130] > 51391683
	I1212 23:14:11.903385    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13816.pem /etc/ssl/certs/51391683.0"
	I1212 23:14:11.935310    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138162.pem && ln -fs /usr/share/ca-certificates/138162.pem /etc/ssl/certs/138162.pem"
	I1212 23:14:11.964261    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970426    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970992    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.982253    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.990140    8472 command_runner.go:130] > 3ec20f2e
	I1212 23:14:12.009886    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138162.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:14:12.038995    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:14:12.069702    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.089604    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.096884    8472 command_runner.go:130] > b5213941
	I1212 23:14:12.110390    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
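
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: during chain verification OpenSSL looks for a CA under <subject-hash>.0 in the certs directory, so each installed PEM gets a symlink named after its subject hash (51391683, 3ec20f2e, and b5213941 in this run). The pattern for a single cert, sketched:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")
	# Link <hash>.0 so OpenSSL can find the CA by subject hash
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"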
	I1212 23:14:12.140395    8472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:14:12.146418    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 kubeadm.go:404] StartCluster: {Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:14:12.155995    8472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 23:14:12.194954    8472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:14:12.223698    8472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:14:12.252003    8472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266543    8472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266717    8472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:14:12.516893    8472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.516947    8472 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.517226    8472 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:14:12.517226    8472 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:14:13.027121    8472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027121    8472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027384    8472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027384    8472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027545    8472 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 23:14:13.027656    8472 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 23:14:13.446026    8472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447343    8472 out.go:204]   - Generating certificates and keys ...
	I1212 23:14:13.446026    8472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447732    8472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:14:13.447800    8472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:14:13.448160    8472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.448217    8472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.576197    8472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.576331    8472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.756341    8472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.756398    8472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.844910    8472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:13.844957    8472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:14.189004    8472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.189084    8472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.353924    8472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.353924    8472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.354351    8472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.354351    8472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.509618    8472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.509618    8472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.510200    8472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.510200    8472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.634812    8472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.634883    8472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.965686    8472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:14.965747    8472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:15.155790    8472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:14:15.155863    8472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:14:15.156194    8472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.156194    8472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.627970    8472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:15.628062    8472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:16.106269    8472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.106461    8472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.241202    8472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.241256    8472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.532306    8472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.532306    8472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.533302    8472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.533432    8472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.538562    8472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.538657    8472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.539723    8472 out.go:204]   - Booting up control plane ...
	I1212 23:14:16.539967    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.540045    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.541855    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.541855    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.543221    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.543286    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.570893    8472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.570998    8472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.572167    8472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572329    8472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572476    8472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:14:16.572590    8472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:14:16.741649    8472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:16.741649    8472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:25.247209    8472 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247209    8472 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247636    8472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.247636    8472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.274937    8472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.274937    8472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.809600    8472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.809600    8472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.810164    8472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:25.810216    8472 kubeadm.go:322] [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:26.326643    8472 kubeadm.go:322] [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.327542    8472 out.go:204]   - Configuring RBAC rules ...
	I1212 23:14:26.326643    8472 command_runner.go:130] > [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.328018    8472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.328018    8472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.341522    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.341728    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.354025    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.354025    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.359843    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.359843    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.364553    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.364553    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.369249    8472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.369249    8472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.393459    8472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.393481    8472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.711238    8472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.711357    8472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.750599    8472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.750686    8472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.751909    8472 kubeadm.go:322] 
	I1212 23:14:26.752244    8472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752244    8472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752424    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.753252    8472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753252    8472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753309    8472 kubeadm.go:322] 
	I1212 23:14:26.753415    8472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322] 
	I1212 23:14:26.753445    8472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.753445    8472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.754014    8472 kubeadm.go:322] 
	I1212 23:14:26.754183    8472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754220    8472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754289    8472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 kubeadm.go:322] 
	I1212 23:14:26.754289    8472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754289    8472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754820    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754820    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754878    8472 kubeadm.go:322] 	--control-plane 
	I1212 23:14:26.754917    8472 command_runner.go:130] > 	--control-plane 
	I1212 23:14:26.754917    8472 kubeadm.go:322] 
	I1212 23:14:26.754995    8472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] 
	I1212 23:14:26.755165    8472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755165    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755707    8472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:26.755762    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:26.756717    8472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:14:26.771363    8472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 23:14:26.781345    8472 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: 2023-12-12 23:12:39.138849800 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Change: 2023-12-12 23:12:30.064000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] >  Birth: -
	I1212 23:14:26.781345    8472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:14:26.781345    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:14:26.831214    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:14:28.360489    8472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5292685s)
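	[Editor's sketch, not part of the test run: the step above applies the kindnet CNI manifest with the cluster-pinned kubectl and reports the elapsed time. A minimal Go reproduction of that invocation, using only the paths shown in the log and assuming kubectl and sudo are available on the guest:]

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Same command ssh_runner executes on the VM (paths from the log above).
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		// Mirrors the "Completed: ... (1.5292685s)" duration line in the log.
		fmt.Printf("%s(took %s, err=%v)\n", out, time.Since(start), err)
	}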
	I1212 23:14:28.360489    8472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:14:28.377434    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.378438    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-392000 minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.385676    8472 command_runner.go:130] > -16
	I1212 23:14:28.385745    8472 ops.go:34] apiserver oom_adj: -16
	I1212 23:14:28.554211    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:14:28.554334    8472 command_runner.go:130] > node/multinode-392000 labeled
	I1212 23:14:28.574988    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.698031    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:28.717179    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.830537    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.348608    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.461037    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.849506    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.957356    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.362625    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.472272    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.848396    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.953849    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.353576    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.462341    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.853090    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.967586    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.355892    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.469924    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.859728    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.962773    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.364239    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.470177    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.864784    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.968916    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.351439    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.459257    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.855142    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.992369    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.364118    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.480745    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.848471    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.981045    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.353504    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:36.474547    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.857811    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.009603    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.360939    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.541831    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.855360    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.978223    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.358089    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:38.550481    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.868761    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.022604    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:39.352440    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.596621    8472 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:14:39.596712    8472 command_runner.go:130] > default   0         0s
	I1212 23:14:39.596736    8472 kubeadm.go:1088] duration metric: took 11.2361966s to wait for elevateKubeSystemPrivileges.
	I1212 23:14:39.596811    8472 kubeadm.go:406] StartCluster complete in 27.450269s
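	[Editor's sketch, not part of the test run: the run of "Error from server (NotFound): serviceaccounts \"default\" not found" lines above is minikube polling roughly every 500ms until the token controller creates the "default" ServiceAccount, which is what the 11.2s duration metric measures. A minimal Go sketch of that wait loop; the kubeconfig path is taken from the log, and kubectl on PATH is an assumption:]

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Exits non-zero with "NotFound" until the controller-manager
			// has created the ServiceAccount in the new cluster.
			err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default").Run()
			if err == nil {
				fmt.Println("default ServiceAccount is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // log shows ~500ms between attempts
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}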
	I1212 23:14:39.596862    8472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.597021    8472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.598694    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.600390    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:14:39.600697    8472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:14:39.600890    8472 addons.go:69] Setting storage-provisioner=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:69] Setting default-storageclass=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:231] Setting addon storage-provisioner=true in "multinode-392000"
	I1212 23:14:39.601014    8472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-392000"
	I1212 23:14:39.601153    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:39.601286    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:39.602024    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.602448    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.615520    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.616537    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.618133    8472 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:14:39.618679    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.618746    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.618746    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.618746    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.632969    8472 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1212 23:14:39.632969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Audit-Id: 48d468c3-d2b5-4ebf-8a31-5cfcaaf2e038
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.633400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.633475    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.633615    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634237    8472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634414    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.634442    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:39.634488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.647166    8472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:14:39.647166    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Audit-Id: 1d18df1e-467b-45b4-8fd3-f1be9c0eb077
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.647166    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.647166    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.647166    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.647166    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.650190    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:39.650593    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.650593    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Audit-Id: 257b2ee0-65f9-4fbe-a3e6-2b26b38e4e97
	I1212 23:14:39.650746    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.650746    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.650879    8472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-392000" context rescaled to 1 replicas
	I1212 23:14:39.650983    8472 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:14:39.652101    8472 out.go:177] * Verifying Kubernetes components...
	I1212 23:14:39.667782    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:39.958848    8472 command_runner.go:130] > apiVersion: v1
	I1212 23:14:39.958848    8472 command_runner.go:130] > data:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   Corefile: |
	I1212 23:14:39.958848    8472 command_runner.go:130] >     .:53 {
	I1212 23:14:39.958848    8472 command_runner.go:130] >         errors
	I1212 23:14:39.958848    8472 command_runner.go:130] >         health {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            lameduck 5s
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         ready
	I1212 23:14:39.958848    8472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            pods insecure
	I1212 23:14:39.958848    8472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:14:39.958848    8472 command_runner.go:130] >            ttl 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         prometheus :9153
	I1212 23:14:39.958848    8472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            max_concurrent 1000
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         cache 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loop
	I1212 23:14:39.958848    8472 command_runner.go:130] >         reload
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loadbalance
	I1212 23:14:39.958848    8472 command_runner.go:130] >     }
	I1212 23:14:39.958848    8472 command_runner.go:130] > kind: ConfigMap
	I1212 23:14:39.958848    8472 command_runner.go:130] > metadata:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:14:26Z"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   name: coredns
	I1212 23:14:39.958848    8472 command_runner.go:130] >   namespace: kube-system
	I1212 23:14:39.958848    8472 command_runner.go:130] >   resourceVersion: "257"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   uid: 7f397c04-a5c3-4364-9f10-d28458f5d6c8
	I1212 23:14:39.959540    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:14:39.961001    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.962156    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.963642    8472 node_ready.go:35] waiting up to 6m0s for node "multinode-392000" to be "Ready" ...
	I1212 23:14:39.963798    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.963914    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.963987    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.963987    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.969659    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:39.969659    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Audit-Id: ed4f4991-8208-4d64-8919-42fbdb031b1b
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.970862    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:39.972406    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.972406    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.972643    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.972643    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.974394    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:39.975312    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Audit-Id: 8a9ed035-646e-4f38-b110-fe61c0dc496f
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.975401    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.975946    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.488957    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.488957    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.488957    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.488957    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.492969    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:40.492969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Audit-Id: d903c580-8adc-4d96-8f5f-d51f731bc93c
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.492969    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.668167    8472 command_runner.go:130] > configmap/coredns replaced
	I1212 23:14:40.669157    8472 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
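	[Editor's sketch, not part of the test run: the sed pipeline a few lines up rewrites the coredns ConfigMap's Corefile (dumped above) so that a hosts{} stanza resolving host.minikube.internal to the gateway IP sits ahead of the forward plugin. A minimal Go sketch of the equivalent string edit on an abbreviated Corefile; minikube itself does this with sed over `kubectl get configmap coredns -o yaml`:]

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated Corefile, structure as in the ConfigMap above.
		corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
    }`
		// hosts{} must precede forward so the in-cluster name wins.
		hosts := `hosts {
           172.30.48.1 host.minikube.internal
           fallthrough
        }
        `
		patched := strings.Replace(corefile, "forward .", hosts+"forward .", 1)
		fmt.Println(patched)
	}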
	I1212 23:14:40.981876    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.981950    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.982011    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.982011    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.991394    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:40.991394    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Audit-Id: ab5b6285-e3ff-4e6f-b61b-a20df0759ba6
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.991394    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.489914    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.490030    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.490030    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.490030    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.494868    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:41.495917    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Audit-Id: 1e563910-36f9-4968-810e-a0bd4b1bd52f
	I1212 23:14:41.496167    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.496302    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.496696    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.903563    8472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:41.903563    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:41.904285    8472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:41.904285    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:14:41.904285    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.905110    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:41.906532    8472 addons.go:231] Setting addon default-storageclass=true in "multinode-392000"
	I1212 23:14:41.906532    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:41.907304    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.980106    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.980486    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.980486    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.980486    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.985786    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:41.985786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.985786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Audit-Id: 08bb64de-dde1-4fa6-8913-0f6b5de0cf24
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.986033    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.986033    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.986463    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.987219    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
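	The repeated GETs that follow are a readiness poll: node_ready.go re-fetches the Node roughly every 500 ms until its Ready condition turns True. A hedged sketch of such a loop with client-go follows; the interval and timeout are assumptions, and this is not minikube's exact implementation:

```go
// Sketch of the node-readiness poll visible in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node every 500ms until the Ready condition is True,
// mirroring the GET cadence in the log.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(context.Background(), c, "multinode-392000"))
}
```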
	I1212 23:14:42.486548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.486653    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.486653    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.486653    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.496333    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:42.496447    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.496447    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.496524    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.496524    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Audit-Id: 4ab1601a-d766-4e5d-a976-df70bc7f3fc6
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.496654    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.497705    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:42.979753    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.979865    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.979865    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.979865    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.984301    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:42.984301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Audit-Id: d84e4388-d133-418c-ad44-eb666ea80368
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.984627    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.984771    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.985134    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.487286    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.487436    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.487436    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.487436    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.493059    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.493240    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Audit-Id: ff7197c8-30b8-4b58-8cc1-df9d319b0dbf
	I1212 23:14:43.493700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.979059    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.979132    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.979132    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.979132    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.984231    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.984231    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Audit-Id: a3b2e6ef-d4d8-4f3e-b9c5-6d5c3c21bbd3
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.984345    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.984345    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.984416    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.984416    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.984602    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.095027    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.095183    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.095249    8472 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:44.095249    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:14:44.095249    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.120131    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:44.483249    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.483332    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.483332    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.483332    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.487173    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.488191    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.488191    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.488335    8472 round_trippers.go:580]     Audit-Id: 266b4ffc-e86f-4f1b-b463-36bca9136481
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.488839    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.489392    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:44.989331    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.989428    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.989428    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.989428    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.992917    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.993400    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Audit-Id: d75583c4-9a74-49b4-bbf3-b56138886974
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.993757    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.481494    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.481494    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.481494    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.481778    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.487002    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:45.487002    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Audit-Id: 34cccb14-bef0-4d33-bac4-e822ad4bf7d0
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.487387    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.990444    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.990444    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.990444    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.990444    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.994459    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:45.995453    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Audit-Id: 75a4ef11-ddaa-4f93-8672-e7309c071368
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.995553    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.995597    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.996008    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.478860    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.478860    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.478860    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.478860    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.482906    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:46.482906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.482906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.484021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Audit-Id: f2e453d5-50bc-4639-bda1-a5a03905d0ad
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:46.484906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.485283    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.902984    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
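	The [executing ==>] entries show the hyperv driver shelling out to PowerShell for VM state and IP lookups before dialing SSH. A hedged reproduction of the IP lookup from Go (the cmdlet string is taken from the log; error handling is simplified and this is not minikube's actual driver code):

```go
// Sketch: query a Hyper-V VM's first IP address via PowerShell, as the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func vmIP(name string) (string, error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name),
	)
	out, err := cmd.Output() // stdout carries the address, stderr is separate
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := vmIP("multinode-392000")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 172.30.51.245 in this run
}
```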
	I1212 23:14:46.980436    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.980521    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.980521    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.980521    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.984189    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:46.984189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Audit-Id: 7c159fbf-c0d0-41ed-a33b-761beff59770
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.984333    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.984744    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.985579    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:47.051355    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:47.484303    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.484303    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.484303    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.484303    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.488895    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:47.488895    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Audit-Id: 28e8c341-cf42-49da-a69a-ab79f001048f
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.489240    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:47.868848    8472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:14:47.868942    8472 command_runner.go:130] > pod/storage-provisioner created
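	The ssh_runner step above runs the bundled kubectl on the node over SSH with the profile's id_rsa key. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; host, user, and key path are illustrative, and minikube's real runner does considerably more (retries, output capture, scp):

```go
// Sketch: apply an addon manifest on the node over SSH, as the log records.
package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\path\to\machines\multinode-392000\id_rsa`) // illustrative path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.30.51.245:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway local test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	sess.Stdout, sess.Stderr = os.Stdout, os.Stderr
	// Same command the log shows for the storage-provisioner addon.
	if err := sess.Run("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		panic(err)
	}
}
```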
	I1212 23:14:47.990911    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.991083    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.991083    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.991083    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.996324    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:47.996324    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Audit-Id: 898f23b9-63a4-46cb-8539-9e21fae3e688
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.997714    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.480781    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.480862    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.480862    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.480862    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.484374    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.485189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.485189    8472 round_trippers.go:580]     Audit-Id: 1a3b1ec7-5eb6-4bb8-b344-5426a5516c00
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.485621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.989623    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.989623    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.989623    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.989698    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.992877    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.993906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Audit-Id: 975a7df8-210f-4288-bec3-86537d1ea98a
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.993906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.993906    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:49.083047    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:49.083318    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:49.083618    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:14:49.220179    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:49.478362    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.478404    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.478488    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.478488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.486550    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:49.486550    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Audit-Id: 886c4e27-fc97-4d2e-be30-23c8528e1331
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.487579    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:49.633908    8472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:14:49.634368    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:14:49.634438    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.634438    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.634438    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.638301    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.638301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Length: 1273
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Audit-Id: 478d6e3c-e333-45bd-ad37-ff39e2c109a4
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.638613    8472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:14:49.639458    8472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.639570    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:14:49.639570    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:49.639632    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.643499    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.643499    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.643499    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Length: 1220
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Audit-Id: a15a2fa8-ae37-4d33-8ee0-c9808f9a288d
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.644178    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.644178    8472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.682970    8472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:14:49.684353    8472 addons.go:502] enable addons completed in 10.0836106s: enabled=[storage-provisioner default-storageclass]
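	The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon confirming the is-default-class annotation on the newly applied class. An equivalent read-modify-update with client-go might look like the sketch below (illustrative, not minikube's code; it would plug into a clientset like the one built earlier):

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefault mirrors the logged GET/PUT: fetch the StorageClass and
// update it with the is-default-class annotation set to "true".
func markDefault(ctx context.Context, c kubernetes.Interface, name string) error {
	sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}) // Update issues the PUT seen above
	return err
}
```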
	I1212 23:14:49.980729    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.980729    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.980729    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.980729    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.984838    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:49.985229    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Audit-Id: ce24cfdd-3acb-4830-ac23-4db47133d6a3
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.985323    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.985624    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.483312    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.483375    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.483375    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.483375    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.488227    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:50.488227    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Audit-Id: 6991df1a-7c65-4f8c-aa6d-8a4b07664792
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.488335    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.488445    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.981018    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.981153    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.981153    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.981153    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.986420    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:50.987021    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Audit-Id: 05d03ac9-757b-47ae-892d-06c9975e0504
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.987288    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.481784    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.481935    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.481935    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.481935    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.487331    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:51.487741    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Audit-Id: ea8e810d-7571-41b8-a29c-f7b350aa7e48
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.488700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.489229    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:51.980060    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.980060    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.980060    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.980060    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.986763    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:51.987222    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Audit-Id: e66e1130-e80e-4e5c-a2df-c6f097d5374f
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.987303    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.487530    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.487615    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.487615    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.487615    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.491306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.491306    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Audit-Id: 6d39f79a-048a-4380-88c0-1538a97cf6cb
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.492158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.988203    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.988350    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.988350    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.988350    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.991874    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.991874    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Audit-Id: b82dc74d-b44e-41ac-8e64-37803addc6c1
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.991874    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.992376    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.992376    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.992866    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.487128    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.487128    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.487128    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.487128    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.490404    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.490404    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Audit-Id: fcdaf883-7338-4102-abda-846f7169bb26
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.491349    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.491797    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:53.988709    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.988958    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.988958    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.988958    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.992351    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.992351    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Audit-Id: c1836498-4d32-49e6-a01e-d2011a223374
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.992796    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.992872    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.992872    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.993179    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.484052    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.484152    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.484152    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.484152    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.487262    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:54.487786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Audit-Id: f53da0c3-a775-4443-aabf-f7c4222d5d96
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.488171    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.984021    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.984123    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.984123    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.984123    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.989880    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:54.989880    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Audit-Id: c5095c7c-a76c-429e-af60-764abe494287
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.991622    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.485045    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.485181    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.485181    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.485181    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.489762    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:55.489762    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Audit-Id: 4f7c8477-81de-4b39-8164-bf264c826669
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.489762    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.490338    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.490338    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.490621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.987165    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.987255    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.987255    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.987255    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.990960    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:55.991209    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Audit-Id: 730af8dd-1c79-432a-ac28-d735f45d211a
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.991209    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:55.991993    8472 node_ready.go:49] node "multinode-392000" has status "Ready":"True"
	I1212 23:14:55.991993    8472 node_ready.go:38] duration metric: took 16.0282441s waiting for node "multinode-392000" to be "Ready" ...
	I1212 23:14:55.991993    8472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
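
The half-second cadence of the Node GETs above is the node_ready wait loop: re-fetch the Node object until its NodeReady condition reports True (which happens here at 23:14:55, after 16.03s), then repeat the same pattern for each system-critical pod. A minimal client-go sketch of such a poll; the kubeconfig path, hard-coded node name, and interval are illustrative assumptions, not minikube's actual helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the node's NodeReady condition is True.
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Build a client from the default kubeconfig (path is an assumption).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Re-fetch the Node twice a second until it reports Ready,
        // matching the ~500ms spacing of the GETs in the log.
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-392000", metav1.GetOptions{})
            if err == nil && nodeIsReady(node) {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println(`node "multinode-392000" is Ready`)
    }
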
	I1212 23:14:55.992424    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:55.992451    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.992451    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.992451    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.997828    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:55.997828    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Audit-Id: 52d7810c-f76c-4c45-9178-39943c5e611e
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:56.000563    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I1212 23:14:56.005713    8472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:56.005713    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.005713    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.005713    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.005713    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.009293    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:56.009293    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.009293    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Audit-Id: 349c895b-3263-4592-bf5f-cc4fce22f4db
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.009961    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.010548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.010601    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.010601    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.010670    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.013302    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.013302    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Audit-Id: 14638822-3485-4ab6-af72-f2d254050772
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.013994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.014102    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.014102    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.014313    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.014948    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.014948    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.014948    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.014948    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.017876    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.017876    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Audit-Id: e61611d3-94ea-464c-acce-2a665e01fb85
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.018073    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.018159    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.018325    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.018970    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.019023    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.019023    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.019078    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.020855    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:56.020855    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Audit-Id: d723e84b-6004-4853-8f4c-e9de464efdde
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.021772    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.021800    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.021800    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.021800    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.536622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.536622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.536622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.536622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.540896    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:56.540896    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.541442    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Audit-Id: ea416197-cb64-40af-bf73-38fd2e37a823
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.541534    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.541534    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.541670    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.542439    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.542559    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.542559    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.542559    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.544902    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.544902    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Audit-Id: 82379cb0-03c3-4187-8a08-c95f8c2d434e
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.546107    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.027636    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.027717    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.027791    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.027791    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.030425    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.030425    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Audit-Id: 856b15b9-b6fa-489d-9a24-eaaf1afc5bd5
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.031434    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.032501    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.032606    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.032658    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.032658    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.035158    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.035158    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Audit-Id: 2f81449f-83b9-4c66-bc2e-17ac17b48322
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.035158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.534454    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.534587    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.534587    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.534587    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.541021    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:57.541365    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Audit-Id: bb822741-a39c-491c-8b27-f5dc32b9ac7d
	I1212 23:14:57.541943    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.542190    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.542190    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.542190    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.542190    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.545257    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:57.545257    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.545896    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Audit-Id: 27629acd-42f2-4083-aba9-c01ef165283c
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.546084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.546084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.546180    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.546712    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:58.023516    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:58.023822    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.023880    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.023880    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.027764    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.028057    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.028057    8472 round_trippers.go:580]     Audit-Id: 1522c4b2-abdb-44ed-9ac8-0a151cbe371e
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.028173    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.028494    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1212 23:14:58.029540    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.029617    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.029617    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.029617    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.032006    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.033008    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Audit-Id: 5f970653-a2f7-4b0e-ab8b-5146ee17b7e9
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.033046    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.033115    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.033423    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.034124    8472 pod_ready.go:92] pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.034124    8472 pod_ready.go:81] duration metric: took 2.0284013s waiting for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
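
Each pod wait above resolves the same way: GET the pod, read its status.conditions, and treat the pod as Ready once the PodReady condition is True. A sketch of that check, assuming client-go types; the helper name is illustrative, not pod_ready.go's actual function:

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether a pod's PodReady condition is True,
    // i.e. what the log prints as: pod ... has status "Ready":"True".
    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
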
	I1212 23:14:58.034124    8472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034268    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-392000
	I1212 23:14:58.034268    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.034268    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.034268    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.040664    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:58.040664    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Audit-Id: 8ec23e55-3f6f-45bb-9dd5-58fa0a89221a
	I1212 23:14:58.041172    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-392000","namespace":"kube-system","uid":"9ba15872-d011-4389-bbbd-cda3bb377f30","resourceVersion":"299","creationTimestamp":"2023-12-12T23:14:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.51.245:2379","kubernetes.io/config.hash":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.mirror":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.seen":"2023-12-12T23:14:17.439033677Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1212 23:14:58.041719    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.041719    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.041719    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.041719    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.045328    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.045328    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Audit-Id: 9c560ca1-5f98-49b8-ae36-71e9aa076f5e
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.045328    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.045328    8472 pod_ready.go:92] pod "etcd-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.045328    8472 pod_ready.go:81] duration metric: took 11.2037ms waiting for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-392000
	I1212 23:14:58.046330    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.046330    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.046330    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.048649    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.048649    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Audit-Id: ebed4532-17cb-49da-a702-3de6ff899b2d
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.048649    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-392000","namespace":"kube-system","uid":"4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75","resourceVersion":"330","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.51.245:8443","kubernetes.io/config.hash":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.mirror":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.seen":"2023-12-12T23:14:26.871565960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1212 23:14:58.048649    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.048649    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.048649    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.052979    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.052979    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.052979    8472 round_trippers.go:580]     Audit-Id: ba4e3ef6-8436-406b-be77-63a9e785adac
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.053729    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.053941    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.054233    8472 pod_ready.go:92] pod "kube-apiserver-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.054233    8472 pod_ready.go:81] duration metric: took 8.9055ms waiting for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-392000
	I1212 23:14:58.054233    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.054233    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.054233    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.057795    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.057795    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.057795    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.057795    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Audit-Id: 23c9283e-f0e0-44ab-b1c7-820bcafbc897
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.058055    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.058481    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-392000","namespace":"kube-system","uid":"60a15f93-6e63-4c2e-a54e-7e6a2275127c","resourceVersion":"296","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.mirror":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.seen":"2023-12-12T23:14:26.871570660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1212 23:14:58.059284    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.059347    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.059347    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.059347    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.067599    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:58.067599    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Audit-Id: cd4581bf-1000-4906-812b-59a573920066
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.068544    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.068544    8472 pod_ready.go:92] pod "kube-controller-manager-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.068544    8472 pod_ready.go:81] duration metric: took 14.3106ms waiting for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.068544    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.194675    8472 request.go:629] Waited for 125.8741ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.194825    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.194825    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.198109    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.198109    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Audit-Id: 5a8d39b0-49cf-41c3-9e07-80cfc7e1b033
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.198994    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.199312    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55nr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"76f72515-2132-4473-883e-2846ebaca62e","resourceVersion":"403","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"932f2a4e-5c28-4c7c-8885-1298fbe1d167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932f2a4e-5c28-4c7c-8885-1298fbe1d167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I1212 23:14:58.398673    8472 request.go:629] Waited for 198.4474ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.398787    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.398966    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.401717    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.401717    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.401717    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Audit-Id: b728eb3e-d54c-43cb-90ce-e7b356f69ae4
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.402725    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.402828    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.403583    8472 pod_ready.go:92] pod "kube-proxy-55nr8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.403583    8472 pod_ready.go:81] duration metric: took 335.0375ms waiting for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
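
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter on the client side, not from the API server's priority-and-fairness machinery. A minimal sketch of where that limiter is configured, assuming a stock client-go setup (the QPS and Burst values are illustrative, not minikube's actual settings):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newThrottledClient builds a clientset whose requests are paced by
    // client-go's token-bucket limiter; when the bucket is empty, request.go
    // logs the "Waited for ... due to client-side throttling" lines seen above.
    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 5    // steady-state requests per second (illustrative)
        cfg.Burst = 10 // short bursts allowed above QPS (illustrative)
        return kubernetes.NewForConfig(cfg)
    }

Requests beyond the burst budget queue locally, which is exactly what the 125-200ms waits logged above look like.
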
	I1212 23:14:58.403583    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.601380    8472 request.go:629] Waited for 197.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.601681    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.601681    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.605957    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.606145    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Audit-Id: 02f9b40f-c4e0-4c98-bcbc-9913ccb796e7
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.606409    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-392000","namespace":"kube-system","uid":"1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2","resourceVersion":"295","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.mirror":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.seen":"2023-12-12T23:14:26.871571660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1212 23:14:58.789396    8472 request.go:629] Waited for 182.2618ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789688    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789779    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.789779    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.789828    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.793340    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.794060    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Audit-Id: e123c53f-d439-4d57-931f-9f875d26f581
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.794126    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.795030    8472 pod_ready.go:92] pod "kube-scheduler-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.795030    8472 pod_ready.go:81] duration metric: took 391.4452ms waiting for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.795030    8472 pod_ready.go:38] duration metric: took 2.8027177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
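
Each pod_ready.go wait above reduces to polling the pod object and checking its Ready condition. A minimal sketch with client-go, assuming a plain get-and-sleep loop (minikube's own helper adds the logging and duration metrics seen here):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout expires, mirroring the "waiting up to 6m0s for pod ..." loops.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
    }
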
	I1212 23:14:58.795030    8472 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:14:58.810986    8472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:14:58.830637    8472 command_runner.go:130] > 2099
	I1212 23:14:58.830637    8472 api_server.go:72] duration metric: took 19.1794438s to wait for apiserver process to appear ...
	I1212 23:14:58.830637    8472 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:14:58.830637    8472 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:14:58.838776    8472 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
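
The healthz probe is a single HTTPS GET that only needs a 200 status and the literal body "ok". A sketch using the standard library; building certPool from minikube's ca.pem is assumed and left out:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs the GET https://<ip>:8443/healthz seen above and
    // reports whether the apiserver answered 200 "ok".
    func checkHealthz(url string, certPool *x509.CertPool) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: certPool},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }
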
	I1212 23:14:58.839718    8472 round_trippers.go:463] GET https://172.30.51.245:8443/version
	I1212 23:14:58.839718    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.839718    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.839718    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.841290    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:58.841290    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.841290    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Length: 264
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.841836    8472 round_trippers.go:580]     Audit-Id: 46b8d46d-380f-4f82-941f-34d5ff7fc981
	I1212 23:14:58.841875    8472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:14:58.841973    8472 api_server.go:141] control plane version: v1.28.4
	I1212 23:14:58.842105    8472 api_server.go:131] duration metric: took 11.468ms to wait for apiserver health ...
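
The /version payload is small enough to decode into a plain struct whose JSON tags match the body printed above (apimachinery ships an equivalent version.Info type; a local struct keeps the sketch self-contained):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo matches the /version response body printed above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func parseVersion(body []byte) (string, error) {
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            return "", err
        }
        // e.g. "control plane version: v1.28.4"
        return fmt.Sprintf("control plane version: %s", v.GitVersion), nil
    }
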
	I1212 23:14:58.842105    8472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:14:58.990794    8472 request.go:629] Waited for 148.3275ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990949    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990993    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.990993    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.990993    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.996780    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:58.996780    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.997050    8472 round_trippers.go:580]     Audit-Id: ef9a1c82-2d0d-4fd5-aef9-3720896905c4
	I1212 23:14:58.998795    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.002276    8472 system_pods.go:59] 8 kube-system pods found
	I1212 23:14:59.002323    8472 system_pods.go:61] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.002414    8472 system_pods.go:74] duration metric: took 160.3082ms to wait for pod list to return data ...
	I1212 23:14:59.002414    8472 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:14:59.195077    8472 request.go:629] Waited for 192.5258ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.195622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.195622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.199306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.199787    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Audit-Id: d11e054d-44f1-4ba9-98c1-9a69160ffdff
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Length: 261
	I1212 23:14:59.199787    8472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7c305be4-9460-4ff1-a283-85a13dcb1cde","resourceVersion":"367","creationTimestamp":"2023-12-12T23:14:39Z"}}]}
	I1212 23:14:59.199787    8472 default_sa.go:45] found service account: "default"
	I1212 23:14:59.199787    8472 default_sa.go:55] duration metric: took 197.3719ms for default service account to be created ...
	I1212 23:14:59.199787    8472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:14:59.396801    8472 request.go:629] Waited for 196.4246ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.397321    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.397321    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.400691    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.400691    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Audit-Id: 70f11694-1074-4f5f-b23d-4a24efbaa455
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.403399    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.408656    8472 system_pods.go:86] 8 kube-system pods found
	I1212 23:14:59.409213    8472 system_pods.go:89] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.409293    8472 system_pods.go:126] duration metric: took 209.505ms to wait for k8s-apps to be running ...
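
The system_pods sweep is one List call over kube-system followed by a per-pod phase check; a compact sketch:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning mirrors the kube-system sweep above: one List
    // call, then a phase check per pod.
    func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return fmt.Errorf("pod %q is %s, not Running", p.Name, p.Status.Phase)
            }
        }
        return nil
    }
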
	I1212 23:14:59.409358    8472 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:14:59.423142    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:59.445203    8472 system_svc.go:56] duration metric: took 35.9106ms WaitForService to wait for kubelet.
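
The kubelet check comes down to whether systemctl exits zero; minikube runs it through its SSH runner, but locally the same test reduces to the sketch below (the argv is the conventional form, not necessarily minikube's exact invocation):

    package main

    import (
        "os/exec"
    )

    // kubeletActive reports whether systemd considers the kubelet unit active;
    // with --quiet, systemctl prints nothing and signals the state via exit code.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
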
	I1212 23:14:59.445871    8472 kubeadm.go:581] duration metric: took 19.7946755s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:14:59.445871    8472 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:14:59.598916    8472 request.go:629] Waited for 152.7318ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.599012    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.599012    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.605849    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:59.605849    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Audit-Id: 36bbb4b8-2cd2-4825-9a0a-f9d3f7de5388
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.605849    8472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1212 23:14:59.606649    8472 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:14:59.606649    8472 node_conditions.go:123] node cpu capacity is 2
	I1212 23:14:59.606649    8472 node_conditions.go:105] duration metric: took 160.7768ms to run NodePressure ...
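
The NodePressure step reads the capacity figures straight off the node objects; a sketch that prints the same two values logged above:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the two capacity figures
    // logged above (ephemeral storage and CPU count).
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
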
	I1212 23:14:59.606649    8472 start.go:228] waiting for startup goroutines ...
	I1212 23:14:59.606649    8472 start.go:233] waiting for cluster config update ...
	I1212 23:14:59.606649    8472 start.go:242] writing updated cluster config ...
	I1212 23:14:59.609246    8472 out.go:177] 
	I1212 23:14:59.621487    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:59.622710    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.625530    8472 out.go:177] * Starting worker node multinode-392000-m02 in cluster multinode-392000
	I1212 23:14:59.626570    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:14:59.626570    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:14:59.627622    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:14:59.627622    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:14:59.627622    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.635421    8472 start.go:365] acquiring machines lock for multinode-392000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:14:59.636404    8472 start.go:369] acquired machines lock for "multinode-392000-m02" in 983.5µs
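
The machines lock above (Delay:500ms Timeout:13m0s) is a named, timeout-bounded mutex that keeps concurrent minikube invocations from provisioning the same machine. minikube relies on a third-party lock for this; a rough sketch of the same shape in plain Go (type and method names here are ours, not minikube's):

    package main

    import (
        "fmt"
        "time"
    )

    // machineLock is a one-slot semaphore; Acquire retries every delay until
    // timeout, echoing the Delay:500ms Timeout:13m0s spec in the log.
    type machineLock struct{ slot chan struct{} }

    func newMachineLock() *machineLock {
        l := &machineLock{slot: make(chan struct{}, 1)}
        l.slot <- struct{}{}
        return l
    }

    func (l *machineLock) Acquire(delay, timeout time.Duration) error {
        deadline := time.After(timeout)
        for {
            select {
            case <-l.slot:
                return nil // acquired
            case <-deadline:
                return fmt.Errorf("timed out after %v", timeout)
            case <-time.After(delay):
                // not free yet; retry
            }
        }
    }

    func (l *machineLock) Release() { l.slot <- struct{}{} }
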
	I1212 23:14:59.636641    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1212 23:14:59.636641    8472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1212 23:14:59.637295    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:14:59.637925    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:14:59.637925    8472 client.go:168] LocalClient.Create starting
	I1212 23:14:59.637925    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:14:59.638507    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.638593    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.638845    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:14:59.639076    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.639124    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.639207    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:15:01.516858    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:04.771547    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:04.771630    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:04.771709    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:08.419999    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:08.420189    8472 main.go:141] libmachine: [stderr =====>] : 
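
Every "[executing ==>]" / "[stdout =====>]" pair above is one PowerShell round trip from the Go driver. A minimal sketch of such a wrapper (runPS is our name, not minikube's):

    package main

    import (
        "bytes"
        "os/exec"
    )

    // runPS invokes powershell.exe the way the log shows: -NoProfile
    // -NonInteractive, one command string, stdout and stderr captured separately.
    func runPS(command string) (stdout, stderr string, err error) {
        cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command)
        var out, errb bytes.Buffer
        cmd.Stdout = &out
        cmd.Stderr = &errb
        err = cmd.Run()
        return out.String(), errb.String(), err
    }
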
	I1212 23:15:08.422680    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:15:08.872411    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: Creating VM...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:12.102765    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:12.102977    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:12.103063    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:15:12.103063    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:13.864474    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:13.864777    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:13.864985    8472 main.go:141] libmachine: Creating VHD
	I1212 23:15:13.864985    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C3CD4AE2-4C48-4AEE-B99B-DEEF0B4769F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:17.628988    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:15:17.629139    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:15:17.638018    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:20.769313    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -SizeBytes 20000MB
	I1212 23:15:23.326059    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:23.326281    8472 main.go:141] libmachine: [stderr =====>] : 
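
The VHD sequence above is the boot2docker disk bootstrap: create a small fixed VHD, write a tar stream (the "magic tar header" plus the SSH key) into its leading bytes so the guest can recognize and format the disk on first boot, then convert it to a dynamic VHD and grow it to the full 20000MB. A sketch of the tar-writing step; the entry names and marker text below are placeholders, since the exact contents are a minikube/boot2docker internal:

    package main

    import (
        "archive/tar"
        "os"
    )

    // writeMagicTar writes a small tar stream into the leading bytes of the
    // fixed VHD so the guest can detect an unformatted boot2docker disk.
    // Entry names and marker text are illustrative placeholders.
    func writeMagicTar(vhdPath string, pubKey []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0644) // write from offset 0, no truncation
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        files := []struct {
            name string
            data []byte
        }{
            {"magic", []byte("boot2docker, please format-me")}, // placeholder marker
            {".ssh/authorized_keys", pubKey},
        }
        for _, file := range files {
            hdr := &tar.Header{Name: file.name, Mode: 0644, Size: int64(len(file.data))}
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            if _, err := tw.Write(file.data); err != nil {
                return err
            }
        }
        return tw.Close()
    }
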
	I1212 23:15:23.326443    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-392000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000-m02 -DynamicMemoryEnabled $false
	I1212 23:15:29.047581    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:29.047983    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:29.048174    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000-m02 -Count 2
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:31.217251    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\boot2docker.iso'
	I1212 23:15:33.748082    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd'
	I1212 23:15:36.359294    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: Starting VM...
	I1212 23:15:36.359738    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000-m02
	I1212 23:15:39.227776    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:15:39.228071    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:41.509631    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:44.031565    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:44.031787    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:45.038541    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:49.774015    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:49.774142    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:50.775721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:55.502870    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:55.503039    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:56.518873    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:58.738659    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:58.738736    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:58.738844    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:02.269014    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:04.506810    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:04.506866    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:04.506903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:07.087487    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:07.087855    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:07.088033    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stderr =====>] : 
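
The host-start wait is a poll: query the VM state, then the first NIC's first IP address; an empty stdout means DHCP has not assigned one yet, so the driver sleeps roughly a second and retries. A sketch that reuses the hypothetical runPS helper from the earlier sketch:

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // waitForIP polls Hyper-V until the VM reports an IP address, mirroring
    // the Get-VM state/ipaddresses loop above. runPS is the sketch from earlier.
    func waitForIP(vmName string, timeout time.Duration) (string, error) {
        query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, _, err := runPS(query)
            if ip := strings.TrimSpace(out); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("no IP for %s after %v", vmName, timeout)
    }
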
	I1212 23:16:09.244063    8472 machine.go:88] provisioning docker machine ...
	I1212 23:16:09.244248    8472 buildroot.go:166] provisioning hostname "multinode-392000-m02"
	I1212 23:16:09.244333    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:11.421631    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:13.977447    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:13.977572    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:13.983166    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:13.992249    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:13.992249    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname
	I1212 23:16:14.163299    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000-m02
	
	I1212 23:16:14.163350    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:16.307595    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:18.839723    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:18.840482    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:18.840482    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:18.989326    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
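
Provisioning dials the new VM over SSH and runs the hostname commands shown verbatim above. A sketch with golang.org/x/crypto/ssh; the user name and host-key handling are assumptions suitable only for a throwaway test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // setHostname runs the same command the log shows over an SSH session.
    func setHostname(addr, keyPath, hostname string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // assumption: boot2docker default user
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
        }
        client, err := ssh.Dial("tcp", addr+":22", cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        cmd := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)
        return sess.Run(cmd)
    }
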
	I1212 23:16:18.990311    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:16:18.990311    8472 buildroot.go:174] setting up certificates
	I1212 23:16:18.990311    8472 provision.go:83] configureAuth start
	I1212 23:16:18.990453    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:21.069665    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:23.556570    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:28.222549    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:28.222832    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:28.222832    8472 provision.go:138] copyHostCerts
	I1212 23:16:28.223026    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:16:28.223356    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:16:28.223356    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:16:28.223923    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:16:28.224665    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:16:28.225195    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:16:28.225367    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:16:28.225569    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:16:28.226891    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:16:28.227287    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:16:28.227287    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:16:28.227775    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:16:28.228810    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000-m02 san=[172.30.56.38 172.30.56.38 localhost 127.0.0.1 minikube multinode-392000-m02]
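The san=[...] list above becomes the subject alternative names of the server certificate, so the Docker daemon's TLS endpoint validates whether it is reached by IP, by localhost, or by node name. A self-contained sketch with crypto/x509 using the values from the log; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log: the VM IP, loopback, and the node names.
    		IPAddresses: []net.IP{net.ParseIP("172.30.56.38"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "multinode-392000-m02"},
    	}
    	// Self-signed here for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("server cert: %d DER bytes\n", len(der))
    }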
	I1212 23:16:28.608171    8472 provision.go:172] copyRemoteCerts
	I1212 23:16:28.622324    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:28.622324    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:30.750561    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:33.272878    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:33.273157    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:33.273672    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:33.380622    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7582767s)
	I1212 23:16:33.380733    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:16:33.380808    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:16:33.420401    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:16:33.420965    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:16:33.458601    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:16:33.458774    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:16:33.496244    8472 provision.go:86] duration metric: configureAuth took 14.5058679s
	I1212 23:16:33.496324    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:33.496868    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:16:33.497008    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:38.152182    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:38.152702    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:38.152702    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:16:38.292294    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:16:38.292294    8472 buildroot.go:70] root file system type: tmpfs
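The provisioner records the root filesystem type ("tmpfs" on this Buildroot guest) before rewriting the docker unit. The probe itself is a one-liner; a local equivalent of it, run on the guest rather than the Windows host:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType reports the filesystem type of /, the same check the
    // provisioner runs over SSH above.
    func rootFSType() (string, error) {
    	out, err := exec.Command("sh", "-c", `df --output=fstype / | tail -n 1`).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	t, err := rootFSType()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(t) // "tmpfs" on the Buildroot guest
    }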
	I1212 23:16:38.292555    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:16:38.292555    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:40.464946    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:43.007365    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:43.008294    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:43.008294    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.51.245"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:16:43.171083    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.51.245
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
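The unit file is rendered host-side and streamed through sudo tee, so everything machine-specific (the NO_PROXY of the primary node, the hyperv provider label, the TLS paths) is baked in before it reaches the guest. A trimmed-down sketch of rendering such a unit with text/template; the field names are illustrative and most dockerd flags are omitted:

    package main

    import (
    	"os"
    	"text/template"
    )

    // unitTmpl is a cut-down version of the unit written above; only the
    // fields that vary per machine are templated.
    const unitTmpl = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    {{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
    {{end}}# Clear the inherited ExecStart before setting our own, otherwise systemd
    # rejects the unit with "more than one ExecStart= setting".
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}}

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
    	t := template.Must(template.New("docker.service").Parse(unitTmpl))
    	data := struct{ NoProxy, Provider string }{"172.30.51.245", "hyperv"}
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }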
	I1212 23:16:43.171185    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:45.284624    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:47.800669    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:47.801716    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:47.801716    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:16:48.748338    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:16:48.748338    8472 machine.go:91] provisioned docker machine in 39.5040974s
	I1212 23:16:48.748338    8472 client.go:171] LocalClient.Create took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m49.1099214s
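Note how the unit was installed at 23:16:47: diff -u compares the rendered docker.service.new against the live unit, and only when they differ (or, as here, when the live unit does not exist yet, hence the "can't stat" message) does the mv/daemon-reload/enable/restart branch run. A sketch of composing that guarded swap, with swapUnitCmd as a hypothetical helper:

    package main

    import "fmt"

    // swapUnitCmd returns the guarded install command: if the rendered unit
    // differs from (or is missing at) the target path, move it into place and
    // reload/enable/restart the service; otherwise do nothing.
    func swapUnitCmd(name string) string {
    	unit := "/lib/systemd/system/" + name
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
    			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
    		unit, name)
    }

    func main() {
    	fmt.Println(swapUnitCmd("docker.service"))
    }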
	I1212 23:16:48.748338    8472 start.go:300] post-start starting for "multinode-392000-m02" (driver="hyperv")
	I1212 23:16:48.748887    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:48.762204    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:48.762204    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:50.863756    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:53.416608    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:53.526358    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7640815s)
	I1212 23:16:53.541029    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:53.550919    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:16:53.550919    8472 command_runner.go:130] > ID=buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:16:53.550919    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:16:53.551099    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:16:53.552635    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:16:53.552635    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:16:53.567223    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:53.582208    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:16:53.623271    8472 start.go:303] post-start completed in 4.8749111s
	I1212 23:16:53.626212    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:55.698604    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:58.239486    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:16:58.242308    8472 start.go:128] duration metric: createHost completed in 1m58.6051335s
	I1212 23:16:58.242308    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:00.321547    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:02.864207    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:02.864907    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:02.864907    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:03.006436    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423023.005320607
	
	I1212 23:17:03.006436    8472 fix.go:206] guest clock: 1702423023.005320607
	I1212 23:17:03.006436    8472 fix.go:219] Guest: 2023-12-12 23:17:03.005320607 +0000 UTC Remote: 2023-12-12 23:16:58.2423084 +0000 UTC m=+328.348317501 (delta=4.763012207s)
	I1212 23:17:03.006606    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:05.102311    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:07.631708    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:07.632284    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:07.632480    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702423023
	I1212 23:17:07.785418    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:17:03 UTC 2023
	
	I1212 23:17:07.785481    8472 fix.go:226] clock set: Tue Dec 12 23:17:03 UTC 2023
	 (err=<nil>)
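The clock fix above reads the guest's `date +%s.%N`, compares it against the host clock, and because the delta (~4.76s) is outside tolerance, pushes the host time into the guest with `sudo date -s @<seconds>`. A sketch of the detection half, using the value from the log; the 2-second threshold here is illustrative:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Guest time as reported by `date +%s.%N` in the log.
    	out := "1702423023.005320607"
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(time.Now()) // guest clock minus host clock
    	fmt.Printf("guest %s, delta %s\n", guest.UTC(), delta)
    	if delta > 2*time.Second || delta < -2*time.Second {
    		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
    	}
    }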
	I1212 23:17:07.785481    8472 start.go:83] releasing machines lock for "multinode-392000-m02", held for 2m8.1482636s
	I1212 23:17:07.785678    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:09.909750    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:12.452194    8472 out.go:177] * Found network options:
	I1212 23:17:12.452963    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.453612    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.454421    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.455285    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:17:12.456641    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.458904    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:12.459089    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:12.471636    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:17:12.471636    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:14.665006    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.330171    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.349676    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.349791    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.350393    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.520588    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:17:17.520698    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0616953s)
	I1212 23:17:17.520789    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1212 23:17:17.520789    8472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0491302s)
	W1212 23:17:17.520789    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:17.540506    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:17.565496    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:17:17.565496    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
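Disabling a CNI config is just a rename: anything matching *bridge* or *podman* under /etc/cni/net.d gains a .mk_disabled suffix, which is why 87-podman-bridge.conflist is reported above. A local sketch of the same idea with filepath.Glob, run on the guest as root:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Rename bridge/podman CNI configs out of the way, as the find/mv
    	// one-liner above does on the guest.
    	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pat)
    		if err != nil {
    			panic(err)
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }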
	I1212 23:17:17.565629    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:17.565729    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:17.592642    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:17:17.606915    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:17:17.641476    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:17:17.660823    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:17:17.677875    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:17:17.711806    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.740097    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:17:17.771613    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.803488    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:17.833971    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
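The sed series rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver (SystemdCgroup = false), the runc v2 shim, pause:3.9 as the sandbox image, and /etc/cni/net.d as the CNI conf dir. The same substitution expressed with Go's regexp package, shown for the SystemdCgroup line:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
    	// Same substitution as:
    	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
    }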
	I1212 23:17:17.864431    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:17.880090    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:17:17.891942    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
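With bridge-nf-call-iptables already confirmed to be 1, the provisioner switches on IPv4 forwarding, which pod-to-pod traffic requires. The echo-into-procfs idiom translates directly; this must run as root on the guest:

    package main

    import "os"

    func main() {
    	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		panic(err)
    	}
    }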
	I1212 23:17:17.921922    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:18.092747    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 23:17:18.119496    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:18.134351    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Unit]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:17:18.152056    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:17:18.152056    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:17:18.152056    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Service]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Type=notify
	I1212 23:17:18.152056    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:17:18.152056    8472 command_runner.go:130] > Environment=NO_PROXY=172.30.51.245
	I1212 23:17:18.152056    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:17:18.152056    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:17:18.152056    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:17:18.152056    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:17:18.152056    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:17:18.152056    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:17:18.152056    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:17:18.152056    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:17:18.153073    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:17:18.153073    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:17:18.153073    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:17:18.153073    8472 command_runner.go:130] > Delegate=yes
	I1212 23:17:18.153073    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:17:18.153073    8472 command_runner.go:130] > KillMode=process
	I1212 23:17:18.153073    8472 command_runner.go:130] > [Install]
	I1212 23:17:18.153073    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:17:18.165057    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.196057    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:18.246410    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.280066    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.313237    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:17:18.368580    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.388251    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:18.419806    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:17:18.434054    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:17:18.440054    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:17:18.453333    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:17:18.468540    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:17:18.509927    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:17:18.683814    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:17:18.837593    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:17:18.838769    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:17:18.883547    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:19.063745    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:18:20.172717    8472 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1212 23:18:20.172717    8472 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1212 23:18:20.172717    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1086969s)
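systemctl restart docker blocks for the full start timeout while dockerd waits on /run/containerd/containerd.sock (visible in the journal below, "context deadline exceeded") and then fails, so the next step is to pull the unit's journal for diagnosis. A sketch of that restart-then-collect pattern:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Restart the service; on failure, gather its journal for diagnosis,
    	// mirroring the ssh_runner steps in the log.
    	if out, err := exec.Command("sudo", "systemctl", "restart", "docker").CombinedOutput(); err != nil {
    		logs, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", "docker").Output()
    		fmt.Printf("restart failed: %v\n%s\n--- journal ---\n%s", err, out, logs)
    		return
    	}
    	fmt.Println("docker restarted")
    }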
	I1212 23:18:20.190447    8472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 23:18:20.208531    8472 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	I1212 23:18:20.208875    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	I1212 23:18:20.208924    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	I1212 23:18:20.208955    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209960    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210035    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1212 23:18:20.210087    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210221    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210255    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210295    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210368    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I1212 23:18:20.218077    8472 out.go:177] 
	W1212 23:18:20.218999    8472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1212 23:18:20.219707    8472 out.go:239] * 
	W1212 23:18:20.220544    8472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:18:20.221540    8472 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-392000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv" : exit status 90
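The journal above isolates the failure: dockerd (pid 1010) logs "Starting up" at 23:17:20 and exits exactly sixty seconds later because it cannot dial /run/containerd/containerd.sock, which suggests the node's system containerd socket was not available after the runtime restart. A minimal manual triage sketch (hypothetical follow-up commands, not part of the recorded test run, assuming the multinode-392000-m02 VM is still reachable over SSH):

	out/minikube-windows-amd64.exe ssh -p multinode-392000 --node multinode-392000-m02 -- sudo systemctl status containerd
	out/minikube-windows-amd64.exe ssh -p multinode-392000 --node multinode-392000-m02 -- sudo ls -l /run/containerd/containerd.sock
	out/minikube-windows-amd64.exe ssh -p multinode-392000 --node multinode-392000-m02 -- sudo journalctl -u containerd --no-pager

If the socket is missing and the containerd unit shows failed, the sixty-second dial timeout in dockerd is the expected symptom rather than the root cause.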
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000: (12.1688681s)
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25: (8.4704935s)
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                    |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| pause   | -p json-output-323100                     | json-output-323100       | testUser          | v1.32.0 | 12 Dec 23 22:51 UTC | 12 Dec 23 22:52 UTC |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| unpause | -p json-output-323100                     | json-output-323100       | testUser          | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:52 UTC |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| stop    | -p json-output-323100                     | json-output-323100       | testUser          | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:52 UTC |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| delete  | -p json-output-323100                     | json-output-323100       | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:52 UTC |
	| start   | -p json-output-error-287300               | json-output-error-287300 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:52 UTC |                     |
	|         | --memory=2200 --output=json               |                          |                   |         |                     |                     |
	|         | --wait=true --driver=fail                 |                          |                   |         |                     |                     |
	| delete  | -p json-output-error-287300               | json-output-error-287300 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:52 UTC |
	| start   | -p first-983800                           | first-983800             | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:52 UTC | 12 Dec 23 22:56 UTC |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| start   | -p second-234000                          | second-234000            | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:56 UTC |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| delete  | -p second-234000                          | second-234000            | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:59 UTC | 12 Dec 23 23:00 UTC |
	| delete  | -p first-983800                           | first-983800             | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:01 UTC | 12 Dec 23 23:01 UTC |
	| start   | -p mount-start-1-459600                   | mount-start-1-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:01 UTC | 12 Dec 23 23:04 UTC |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46464                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube7:/minikube-host | mount-start-1-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:04 UTC |                     |
	|         | --profile mount-start-1-459600 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46464 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-1-459600 ssh -- ls            | mount-start-1-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:04 UTC | 12 Dec 23 23:04 UTC |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| start   | -p mount-start-2-459600                   | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:04 UTC | 12 Dec 23 23:07 UTC |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube7:/minikube-host | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:07 UTC |                     |
	|         | --profile mount-start-2-459600 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-459600 ssh -- ls            | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:07 UTC | 12 Dec 23 23:07 UTC |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-1-459600                   | mount-start-1-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:07 UTC | 12 Dec 23 23:08 UTC |
	|         | --alsologtostderr -v=5                    |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-459600 ssh -- ls            | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| stop    | -p mount-start-2-459600                   | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:09 UTC |
	| start   | -p mount-start-2-459600                   | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:10 UTC |
	| mount   | C:\Users\jenkins.minikube7:/minikube-host | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:10 UTC |                     |
	|         | --profile mount-start-2-459600 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-459600 ssh -- ls            | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:11 UTC |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-2-459600                   | mount-start-2-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:11 UTC |
	| delete  | -p mount-start-1-459600                   | mount-start-1-459600     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:11 UTC |
	| start   | -p multinode-392000                       | multinode-392000         | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --wait=true --memory=2200                 |                          |                   |         |                     |                     |
	|         | --nodes=2 -v=8                            |                          |                   |         |                     |                     |
	|         | --alsologtostderr                         |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:11:30
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.071716    8472 out.go:309] Setting ErrFile to fd 756...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.094706    8472 out.go:303] Setting JSON to false
	I1212 23:11:30.097728    8472 start.go:128] hostinfo: {"hostname":"minikube7","uptime":76287,"bootTime":1702346402,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:11:30.097728    8472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:11:30.099331    8472 out.go:177] * [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:11:30.099722    8472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:11:30.099722    8472 notify.go:220] Checking for updates...
	I1212 23:11:30.100958    8472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:11:30.101483    8472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:11:30.102516    8472 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:11:30.103354    8472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:11:30.104853    8472 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:11:35.379035    8472 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:11:35.380001    8472 start.go:298] selected driver: hyperv
	I1212 23:11:35.380001    8472 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:11:35.380001    8472 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:11:35.430879    8472 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:11:35.431976    8472 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:11:35.432174    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:11:35.432174    8472 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:11:35.432174    8472 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:11:35.432174    8472 start_flags.go:323] config:
	{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:35.432785    8472 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:11:35.434592    8472 out.go:177] * Starting control plane node multinode-392000 in cluster multinode-392000
	I1212 23:11:35.434882    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:11:35.435410    8472 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:11:35.435444    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:11:35.435894    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:11:35.435894    8472 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I1212 23:11:35.436458    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:11:35.436458    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json: {Name:mk07adc881ba1a1ec87edb34c2760e84e9f12eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
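	A note on the two profile lines above: the full ClusterConfig dump from start_flags.go is persisted to profiles\multinode-392000\config.json, guarded by a short-lived write lock. A minimal Go sketch of that save path, with a trimmed struct and invented helper names (this is not minikube's actual API):

		package main

		import (
			"encoding/json"
			"os"
			"path/filepath"
		)

		// clusterConfig carries only a few of the fields visible in the dump above.
		type clusterConfig struct {
			Name              string
			Driver            string
			Memory            int
			CPUs              int
			DiskSize          int
			KubernetesVersion string
		}

		// saveProfile writes profiles/<name>/config.json, mirroring the Saving-config line.
		func saveProfile(miniHome string, cfg clusterConfig) error {
			dir := filepath.Join(miniHome, "profiles", cfg.Name)
			if err := os.MkdirAll(dir, 0o755); err != nil {
				return err
			}
			data, err := json.MarshalIndent(cfg, "", "  ")
			if err != nil {
				return err
			}
			return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
		}

		func main() {
			_ = saveProfile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube`,
				clusterConfig{Name: "multinode-392000", Driver: "hyperv", Memory: 2200,
					CPUs: 2, DiskSize: 20000, KubernetesVersion: "v1.28.4"})
		}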
	I1212 23:11:35.438010    8472 start.go:365] acquiring machines lock for multinode-392000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:11:35.438172    8472 start.go:369] acquired machines lock for "multinode-392000" in 43.3µs
	I1212 23:11:35.438240    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:11:35.438240    8472 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 23:11:35.439294    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:11:35.439734    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:11:35.439996    8472 client.go:168] LocalClient.Create starting
	I1212 23:11:35.440162    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441050    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441543    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:11:37.487993    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:11:39.204044    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:11:39.204143    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:39.204222    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:40.663233    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:44.190819    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:44.191081    8472 main.go:141] libmachine: [stderr =====>] : 
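	Switch discovery above is a plain PowerShell round-trip: the driver asks Get-VMSwitch for anything External or for the well-known Default Switch GUID, then decodes the JSON reply (SwitchType is the Hyper-V enum Private=0, Internal=1, External=2, so the Default Switch reports 1). A minimal Go sketch of the same query, assuming nothing beyond the logged command:

		package main

		import (
			"encoding/json"
			"fmt"
			"os/exec"
		)

		// vmSwitch matches the three fields selected in the logged pipeline.
		type vmSwitch struct {
			Id         string
			Name       string
			SwitchType int // Private=0, Internal=1, External=2
		}

		func listSwitches() ([]vmSwitch, error) {
			script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
				`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
			out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
			if err != nil {
				return nil, fmt.Errorf("Get-VMSwitch failed: %w", err)
			}
			var switches []vmSwitch
			err = json.Unmarshal(out, &switches)
			return switches, err
		}

		func main() {
			fmt.Println(listSwitches())
		}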
	I1212 23:11:44.194062    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:11:44.711737    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: Creating VM...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:47.732456    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:47.732576    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:47.732727    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:11:47.732880    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:49.467956    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:49.468070    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:49.468070    8472 main.go:141] libmachine: Creating VHD
	I1212 23:11:49.468208    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F469FE2D-E21B-45E1-BE12-1FCB18DB12B2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:11:53.108721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:56.276637    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -SizeBytes 20000MB
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stderr =====>] : 
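	The disk work between 23:11:49 and 23:11:58 is the Hyper-V driver's bootstrap trick: create a tiny fixed VHD (raw data followed by a 512-byte footer), write a tar stream carrying the SSH key straight into its data area, then convert it to a dynamic VHD and resize it to the requested 20000MB. A hedged Go sketch of just the tar-writing step; the entry name is an assumption, since the log shows the "magic tar header" being written but not the exact layout the guest expects:

		package main

		import (
			"archive/tar"
			"os"
		)

		// writeKeyTar writes a tar archive at offset 0 of a fixed VHD so the guest
		// can pick up the SSH public key on first boot. Illustrative only.
		func writeKeyTar(vhdPath string, pubKey []byte) error {
			f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
			if err != nil {
				return err
			}
			defer f.Close()
			tw := tar.NewWriter(f) // a fixed VHD's data area starts at offset 0, before the footer
			hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
			if err := tw.WriteHeader(hdr); err != nil {
				return err
			}
			if _, err := tw.Write(pubKey); err != nil {
				return err
			}
			return tw.Close()
		}

		func main() {
			key, _ := os.ReadFile("id_rsa.pub") // hypothetical key path
			_ = writeKeyTar("fixed.vhd", key)
		}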
	I1212 23:11:58.764692    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-392000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000 -DynamicMemoryEnabled $false
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:04.436332    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000 -Count 2
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\boot2docker.iso'
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd'
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:11.817904    8472 main.go:141] libmachine: Starting VM...
	I1212 23:12:11.817904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:12:14.636759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:16.857062    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:16.857260    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:16.857330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:20.386945    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:22.605951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:26.191747    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:28.348821    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:30.824944    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:30.825184    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:31.825449    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:36.445712    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:36.445785    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:37.459217    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:42.223526    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:44.305043    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:44.305406    8472 main.go:141] libmachine: [stderr =====>] : 
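	The repeated state/ipaddresses queries from 23:12:14 to 23:12:44 are a simple poll: ask for the VM state, ask the first NIC for its first address, sleep briefly, and retry until an IP appears. A compact Go sketch of that wait loop (helper names invented):

		package main

		import (
			"fmt"
			"os/exec"
			"strings"
			"time"
		)

		// waitForIP polls Hyper-V the way the log does until the first NIC
		// reports an address or the deadline passes.
		func waitForIP(vmName string, timeout time.Duration) (string, error) {
			query := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName)
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
				if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
					return ip, nil
				}
				time.Sleep(time.Second)
			}
			return "", fmt.Errorf("no IP for %s within %v", vmName, timeout)
		}

		func main() {
			fmt.Println(waitForIP("multinode-392000", 5*time.Minute))
		}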
	I1212 23:12:44.305406    8472 machine.go:88] provisioning docker machine ...
	I1212 23:12:44.305506    8472 buildroot.go:166] provisioning hostname "multinode-392000"
	I1212 23:12:44.305650    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:46.463699    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:48.946017    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:48.946116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:48.952068    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:48.964084    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:48.964084    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname
	I1212 23:12:49.130659    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000
	
	I1212 23:12:49.130793    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:51.216440    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:53.725386    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:53.726016    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:53.726016    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:12:53.876910    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:12:53.876910    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:12:53.877039    8472 buildroot.go:174] setting up certificates
	I1212 23:12:53.877109    8472 provision.go:83] configureAuth start
	I1212 23:12:53.877163    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:55.991772    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:58.499603    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:00.594939    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:03.100178    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:03.100273    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:03.100273    8472 provision.go:138] copyHostCerts
	I1212 23:13:03.100538    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:13:03.100666    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:13:03.100666    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:13:03.101260    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:13:03.102786    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:13:03.103156    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:13:03.103156    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:13:03.103581    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:13:03.104593    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:13:03.105032    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:13:03.105032    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:13:03.105182    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:13:03.106302    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000 san=[172.30.51.245 172.30.51.245 localhost 127.0.0.1 minikube multinode-392000]
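	The server certificate generated here is signed by the local minikube CA and carries the IP and DNS SANs listed in the log line. A standard-library Go sketch of that step, assuming the CA pair has already been loaded from ca.pem/ca-key.pem (loading omitted; names are illustrative):

		package sketch

		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"crypto/x509/pkix"
			"math/big"
			"net"
			"time"
		)

		// newServerCert issues a TLS server certificate with the given SANs,
		// signed by the provided CA, much like provision.go:112 above.
		func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
			ips []net.IP, dnsNames []string) (der []byte, key *rsa.PrivateKey, err error) {
			key, err = rsa.GenerateKey(rand.Reader, 2048)
			if err != nil {
				return nil, nil, err
			}
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(time.Now().UnixNano()),
				Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000"}},
				NotBefore:    time.Now(),
				NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
				KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
				IPAddresses:  ips,      // e.g. 172.30.51.245, 127.0.0.1
				DNSNames:     dnsNames, // e.g. localhost, minikube, multinode-392000
			}
			der, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
			return der, key, err
		}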
	I1212 23:13:03.360027    8472 provision.go:172] copyRemoteCerts
	I1212 23:13:03.374057    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:13:03.374057    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:08.008195    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:08.116237    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7420653s)
	I1212 23:13:08.116237    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:13:08.116427    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:13:08.152557    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:13:08.153040    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:13:08.195988    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:13:08.196559    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:13:08.232338    8472 provision.go:86] duration metric: configureAuth took 14.3551646s
	I1212 23:13:08.232338    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:13:08.233351    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:13:08.233351    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:10.326980    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:12.830327    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:12.831103    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:12.831103    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:13:12.971332    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:13:12.971397    8472 buildroot.go:70] root file system type: tmpfs
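	That tmpfs answer is what buildroot.go:70 records before the docker unit gets rewritten: the probe is a single remote command whose last line is the root filesystem type. A tiny Go sketch, where runSSH is a hypothetical helper that runs a command over the established SSH session and returns its stdout:

		package sketch

		import "strings"

		// rootFSType mirrors the probe at 23:13:12: take the last line of
		// `df --output=fstype /` as the root filesystem type.
		func rootFSType(runSSH func(string) (string, error)) (string, error) {
			out, err := runSSH("df --output=fstype / | tail -n 1")
			return strings.TrimSpace(out), err
		}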
	I1212 23:13:12.971686    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:13:12.971759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:17.524781    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:17.524929    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:17.532264    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:17.532875    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:17.533036    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:13:17.693682    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:13:17.693682    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:19.797719    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:22.305428    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:22.305611    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:22.311364    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:22.312148    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:22.312148    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:13:23.268460    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:13:23.268460    8472 machine.go:91] provisioned docker machine in 38.9628792s
	I1212 23:13:23.268460    8472 client.go:171] LocalClient.Create took 1m47.8279792s
	I1212 23:13:23.268460    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m47.8282413s
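	Note how the unit swap back at 23:13:22 stays idempotent: the rendered docker.service.new is diffed against the live unit and only moved into place (with daemon-reload, enable, restart) when they differ; on this first boot the diff fails because no unit exists yet, so the replacement runs and the symlink is created. A Go sketch of driving that pattern, with runSSH again a hypothetical helper:

		package sketch

		import "fmt"

		// updateUnit replaces a systemd unit only when the rendered copy differs,
		// so re-provisioning an unchanged machine never restarts docker needlessly.
		func updateUnit(runSSH func(string) (string, error), unit string) error {
			cmd := fmt.Sprintf(
				"sudo diff -u %[1]s %[1]s.new || "+
					"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
					"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
				unit)
			_, err := runSSH(cmd)
			return err
		}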
	I1212 23:13:23.268460    8472 start.go:300] post-start starting for "multinode-392000" (driver="hyperv")
	I1212 23:13:23.268460    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:13:23.283134    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:13:23.283134    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:25.344143    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:25.344398    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:25.344531    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:27.853202    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:27.960465    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6773102s)
	I1212 23:13:27.975019    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:13:27.981168    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:13:27.981317    8472 command_runner.go:130] > ID=buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:13:27.981317    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:13:27.981408    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:13:27.981509    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:13:27.981573    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:13:27.982899    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:13:27.982899    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:13:27.996731    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:13:28.011281    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:13:28.049499    8472 start.go:303] post-start completed in 4.7810169s
	I1212 23:13:28.051903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:30.124520    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:32.635986    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:32.636168    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:32.636335    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:13:32.639612    8472 start.go:128] duration metric: createHost completed in 1m57.2008454s
	I1212 23:13:32.639734    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:37.252006    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:37.252675    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:37.252675    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:13:37.394466    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422817.389981544
	
	I1212 23:13:37.394466    8472 fix.go:206] guest clock: 1702422817.389981544
	I1212 23:13:37.394466    8472 fix.go:219] Guest: 2023-12-12 23:13:37.389981544 +0000 UTC Remote: 2023-12-12 23:13:32.6396781 +0000 UTC m=+122.746612401 (delta=4.750303444s)
	I1212 23:13:37.394466    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:39.525951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:42.048856    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:42.049171    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:42.054999    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:42.057020    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:42.057020    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702422817
	I1212 23:13:42.207558    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:13:37 UTC 2023
	
	I1212 23:13:42.207558    8472 fix.go:226] clock set: Tue Dec 12 23:13:37 UTC 2023
	 (err=<nil>)
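	The clock fix above reads the guest time with `date +%s.%N`, computes a 4.75s delta against the host, and writes a corrected epoch back with `sudo date -s @...`. A hedged sketch of one way to implement such a check (the exact correction policy here differs from minikube's; runSSH is hypothetical):

		package sketch

		import (
			"fmt"
			"strconv"
			"strings"
			"time"
		)

		// syncGuestClock reads the guest clock over SSH and pushes the host time
		// when the drift exceeds maxDrift.
		func syncGuestClock(runSSH func(string) (string, error), maxDrift time.Duration) error {
			out, err := runSSH("date +%s.%N")
			if err != nil {
				return err
			}
			secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
			if err != nil {
				return err
			}
			guest := time.Unix(0, int64(secs*float64(time.Second))) // float64 loses some ns precision
			if drift := time.Since(guest); drift > maxDrift || drift < -maxDrift {
				_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
			}
			return err
		}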
	I1212 23:13:42.207558    8472 start.go:83] releasing machines lock for "multinode-392000", held for 2m6.7687735s
	I1212 23:13:42.208388    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:46.748039    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:46.748116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:46.752230    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:13:46.752339    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:46.765270    8472 ssh_runner.go:195] Run: cat /version.json
	I1212 23:13:46.765814    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:51.518393    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.518589    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.519047    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.538571    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.618146    8472 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 23:13:51.618146    8472 ssh_runner.go:235] Completed: cat /version.json: (4.8528548s)
	I1212 23:13:51.632470    8472 ssh_runner.go:195] Run: systemctl --version
	I1212 23:13:51.705182    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:13:51.705326    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9530322s)
	I1212 23:13:51.705474    8472 command_runner.go:130] > systemd 247 (247)
	I1212 23:13:51.705474    8472 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:13:51.717133    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:13:51.725591    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:13:51.726008    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:13:51.738060    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:13:51.760525    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:13:51.761431    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:13:51.761431    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:51.761737    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:51.787290    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:13:51.802604    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:13:51.833298    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:13:51.849124    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:13:51.865424    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:13:51.896430    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.925062    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:13:51.954292    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.986199    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:13:52.018341    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:13:52.051014    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:13:52.066722    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:13:52.079021    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:13:52.108672    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:52.285653    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 23:13:52.311279    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:52.326723    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Unit]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:13:52.345659    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:13:52.345659    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:13:52.345659    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Service]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Type=notify
	I1212 23:13:52.345659    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:13:52.345659    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:13:52.346602    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:13:52.346602    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:13:52.346602    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:13:52.346602    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:13:52.346602    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:13:52.346602    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:13:52.346602    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:13:52.346602    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:13:52.346602    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:13:52.346602    8472 command_runner.go:130] > Delegate=yes
	I1212 23:13:52.346602    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:13:52.346602    8472 command_runner.go:130] > KillMode=process
	I1212 23:13:52.346602    8472 command_runner.go:130] > [Install]
	I1212 23:13:52.346602    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:13:52.361605    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.398612    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:13:52.438497    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.478249    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.515469    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:13:52.572526    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.596922    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:52.625715    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:13:52.640295    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:13:52.648317    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:13:52.660918    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:13:52.675527    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:13:52.716542    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:13:52.882321    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:13:53.028395    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:13:53.028810    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:13:53.070347    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:53.231794    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:13:54.707655    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4758548s)
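The daemon.json written above (130 bytes; its contents are not echoed in the log) switches Docker to the cgroupfs cgroup driver so it matches the kubelet's cgroupDriver later in this log; a driver mismatch is a classic cause of kubelet startup failures. A hedged reconstruction using the conventional exec-opts key (illustrative only, since the exact payload isn't shown):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Hypothetical reconstruction of /etc/docker/daemon.json; the log
        // only reports the file size, not its contents.
        daemon := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, _ := json.MarshalIndent(daemon, "", "  ")
        fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
    }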
	I1212 23:13:54.722714    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:54.886957    8472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 23:13:55.059072    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:55.219495    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.397909    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 23:13:55.436243    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.597738    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 23:13:55.697504    8472 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 23:13:55.711625    8472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:13:55.718995    8472 command_runner.go:130] > Device: 16h/22d	Inode: 928         Links: 1
	I1212 23:13:55.718995    8472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 23:13:55.719086    8472 command_runner.go:130] > Access: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Modify: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Change: 2023-12-12 23:13:55.617702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] >  Birth: -
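"Will wait 60s for socket path" above is implemented as a stat poll with a deadline: keep checking for the socket until it exists or time runs out. A minimal sketch of that pattern:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses,
    // mirroring the stat loop in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }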
	I1212 23:13:55.719245    8472 start.go:543] Will wait 60s for crictl version
	I1212 23:13:55.732224    8472 ssh_runner.go:195] Run: which crictl
	I1212 23:13:55.737239    8472 command_runner.go:130] > /usr/bin/crictl
	I1212 23:13:55.751402    8472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:13:55.821560    8472 command_runner.go:130] > Version:  0.1.0
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeName:  docker
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:13:55.821684    8472 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 23:13:55.831458    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.865302    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.877867    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.906635    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.909704    8472 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 23:13:55.909704    8472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: 172.30.48.1/20
	I1212 23:13:55.931095    8472 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 23:13:55.936984    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
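The one-liner above makes the host.minikube.internal entry idempotent: grep -v drops any stale mapping before the fresh IP is appended, so repeated starts never stack duplicate lines. The same transform in Go, as a sketch:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any line ending in "\t<name>" and appends a new
    // "<ip>\t<name>" entry -- the grep -v / echo pipeline from the log.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "172.30.48.1", "host.minikube.internal"))
    }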
	I1212 23:13:55.954782    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:13:55.966850    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:13:55.989987    8472 docker.go:671] Got preloaded images: 
	I1212 23:13:55.989987    8472 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 23:13:56.002978    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:13:56.016572    8472 command_runner.go:139] > {"Repositories":{}}
	I1212 23:13:56.029505    8472 ssh_runner.go:195] Run: which lz4
	I1212 23:13:56.035359    8472 command_runner.go:130] > /usr/bin/lz4
	I1212 23:13:56.035359    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:13:56.046382    8472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:13:56.052856    8472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 23:13:58.736125    8472 docker.go:635] Took 2.700536 seconds to copy over tarball
	I1212 23:13:58.753146    8472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:14:08.022919    8472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2697318s)
	I1212 23:14:08.022919    8472 ssh_runner.go:146] rm: /preloaded.tar.lz4
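The preload flow above avoids pulling images over the network: since stat found no /preloaded.tar.lz4 on the VM, the ~423 MB tarball is copied in and unpacked over /var, dropping a pre-populated /var/lib/docker in place. A sketch of the extraction step (assumes tar and lz4 exist on the guest, as the "which lz4" check above verified):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks the preload tarball into /var with lz4 as
    // the tar compressor, matching the command in the log above.
    func extractPreload() error {
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("tar failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload(); err != nil {
            fmt.Println(err)
        }
    }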
	I1212 23:14:08.095190    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:14:08.111721    8472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 23:14:08.111721    8472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
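Unpacking the layers alone is not enough: Docker resolves repo:tag names through /var/lib/docker/image/overlay2/repositories.json, which maps each reference to a content-addressed image ID. That is why the file is rewritten (2629 bytes) and the daemon restarted before images are listed again. A sketch that parses the same structure (sample trimmed from the JSON dump above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // repositories.json shape: repository -> (tag or digest ref -> image ID).
    type repoFile struct {
        Repositories map[string]map[string]string `json:"Repositories"`
    }

    func main() {
        data := []byte(`{"Repositories":{"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}`)
        var rf repoFile
        if err := json.Unmarshal(data, &rf); err != nil {
            panic(err)
        }
        for repo, refs := range rf.Repositories {
            for ref, id := range refs {
                fmt.Printf("%s -> %s (%s)\n", repo, ref, id)
            }
        }
    }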
	I1212 23:14:08.157625    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:14:08.340167    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:14:10.676687    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3364436s)
	I1212 23:14:10.688217    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:14:10.713622    8472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 23:14:10.713688    8472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:10.713884    8472 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 23:14:10.713884    8472 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:14:10.725093    8472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 23:14:10.761269    8472 command_runner.go:130] > cgroupfs
	I1212 23:14:10.761441    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:10.761635    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:10.761699    8472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:14:10.761699    8472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.51.245 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-392000 NodeName:multinode-392000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.51.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.51.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:14:10.761920    8472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.51.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-392000"
	  kubeletExtraArgs:
	    node-ip: 172.30.51.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.51.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:14:10.762050    8472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
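The kubeadm options struct above is rendered into the three config documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration) plus the kubelet unit drop-in. Two deliberate choices in the generated YAML are worth noting: evictionHard thresholds of "0%" together with imageGCHighThresholdPercent: 100 disable kubelet disk eviction (the "disable disk resource management" comment), and conntrack values of 0/0s tell kube-proxy to leave the host's nf_conntrack sysctls untouched. A pared-down rendering sketch (field and template names are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // A fragment of an InitConfiguration rendered from values seen in the
    // kubeadm options above.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        p := struct {
            AdvertiseAddress, CRISocket, NodeName string
            APIServerPort                         int
        }{"172.30.51.245", "unix:///var/run/cri-dockerd.sock", "multinode-392000", 8443}
        _ = template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }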
	I1212 23:14:10.779262    8472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:14:10.794245    8472 command_runner.go:130] > kubeadm
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubectl
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubelet
	I1212 23:14:10.794911    8472 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:14:10.809051    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:14:10.823032    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:14:10.848411    8472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:14:10.870951    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 23:14:10.911088    8472 ssh_runner.go:195] Run: grep 172.30.51.245	control-plane.minikube.internal$ /etc/hosts
	I1212 23:14:10.917196    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.51.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:14:10.933858    8472 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000 for IP: 172.30.51.245
	I1212 23:14:10.933934    8472 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:10.934858    8472 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 23:14:10.935530    8472 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 23:14:10.936524    8472 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key
	I1212 23:14:10.936810    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt with IP's: []
	I1212 23:14:11.093297    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt ...
	I1212 23:14:11.093297    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt: {Name:mk11a4d3835ab9ea840eb8ac6add84affb6c8dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.094980    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key ...
	I1212 23:14:11.094980    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key: {Name:mk06fddcf6422638da0b31b4d428923c70703238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.095936    8472 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa
	I1212 23:14:11.096955    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa with IP's: [172.30.51.245 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:14:11.196952    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa ...
	I1212 23:14:11.197202    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa: {Name:mkdf435dcc8983bec1e572c7a448162db34b2756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.198846    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa ...
	I1212 23:14:11.198846    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa: {Name:mk41672c6a02cbb3382bef7d288d52f8f77ae5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.199921    8472 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt
	I1212 23:14:11.213239    8472 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key
	I1212 23:14:11.214508    8472 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key
	I1212 23:14:11.214661    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt with IP's: []
	I1212 23:14:11.328325    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt ...
	I1212 23:14:11.328325    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt: {Name:mk6e1ad80e6dad066789266c677d39834bd11583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.330616    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key ...
	I1212 23:14:11.330616    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key: {Name:mk3959079764fecf7ecbee13715f18146dcf3506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.332006    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:14:11.332144    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:14:11.332442    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:14:11.342046    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:14:11.342358    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:14:11.342600    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:14:11.342813    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:14:11.343009    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
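The certs phase reuses the shared minikubeCA key pair (the two "skipping ... CA generation" lines above) and mints profile-scoped certificates; the apiserver cert is signed with the IP SANs listed above, where 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR. A compact Go sketch of signing a server certificate with IP SANs from a CA (the CA is self-signed here for brevity, whereas minikube loads an existing ca.key):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in for minikubeCA; errors elided for brevity.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the IP SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("172.30.51.245"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }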
	I1212 23:14:11.343165    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem (1338 bytes)
	W1212 23:14:11.343825    8472 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816_empty.pem, impossibly tiny 0 bytes
	I1212 23:14:11.343825    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 23:14:11.344117    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 23:14:11.344381    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 23:14:11.344630    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 23:14:11.344862    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem (1708 bytes)
	I1212 23:14:11.344862    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem -> /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.345574    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.345718    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:11.345852    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:14:11.386214    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:14:11.425674    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:14:11.464191    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:14:11.502474    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:14:11.538128    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:14:11.575129    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:14:11.613906    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:14:11.650659    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem --> /usr/share/ca-certificates/13816.pem (1338 bytes)
	I1212 23:14:11.686706    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /usr/share/ca-certificates/138162.pem (1708 bytes)
	I1212 23:14:11.726349    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:14:11.762200    8472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:14:11.800421    8472 ssh_runner.go:195] Run: openssl version
	I1212 23:14:11.809841    8472 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:14:11.823469    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13816.pem && ln -fs /usr/share/ca-certificates/13816.pem /etc/ssl/certs/13816.pem"
	I1212 23:14:11.861330    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.882273    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.889871    8472 command_runner.go:130] > 51391683
	I1212 23:14:11.903385    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13816.pem /etc/ssl/certs/51391683.0"
	I1212 23:14:11.935310    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138162.pem && ln -fs /usr/share/ca-certificates/138162.pem /etc/ssl/certs/138162.pem"
	I1212 23:14:11.964261    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970426    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970992    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.982253    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.990140    8472 command_runner.go:130] > 3ec20f2e
	I1212 23:14:12.009886    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138162.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:14:12.038995    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:14:12.069702    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.089604    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.096884    8472 command_runner.go:130] > b5213941
	I1212 23:14:12.110390    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
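The openssl x509 -hash calls explain the odd symlink names above: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash filenames (51391683.0, 3ec20f2e.0, b5213941.0), so linking <hash>.0 to the PEM is what makes each certificate trusted system-wide. A sketch of the same sequence (shells out to openssl; sudo and most error handling elided):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert mimics the log's openssl/ln steps: compute the subject
    // hash and expose the PEM under /etc/ssl/certs/<hash>.0.
    func trustCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }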
	I1212 23:14:12.140395    8472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:14:12.146418    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 kubeadm.go:404] StartCluster: {Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:14:12.155995    8472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 23:14:12.194954    8472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:14:12.223698    8472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:14:12.252003    8472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266543    8472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266717    8472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:14:12.516893    8472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.516947    8472 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.517226    8472 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:14:12.517226    8472 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:14:13.027121    8472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027121    8472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027384    8472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027384    8472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027545    8472 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.027656    8472 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.446026    8472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447343    8472 out.go:204]   - Generating certificates and keys ...
	I1212 23:14:13.446026    8472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447732    8472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:14:13.447800    8472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:14:13.448160    8472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.448217    8472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.576197    8472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.576331    8472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.756341    8472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.756398    8472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.844910    8472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:13.844957    8472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:14.189004    8472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.189084    8472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.353924    8472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.353924    8472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.354351    8472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.354351    8472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.509618    8472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.509618    8472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.510200    8472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.510200    8472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.634812    8472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.634883    8472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.965686    8472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:14.965747    8472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:15.155790    8472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:14:15.155863    8472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:14:15.156194    8472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.156194    8472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.627970    8472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:15.628062    8472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:16.106269    8472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.106461    8472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.241202    8472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.241256    8472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.532306    8472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.532306    8472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.533302    8472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.533432    8472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.538562    8472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.538657    8472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.539723    8472 out.go:204]   - Booting up control plane ...
	I1212 23:14:16.539967    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.540045    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.541855    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.541855    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.543221    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.543286    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.570893    8472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.570998    8472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.572167    8472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572329    8472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572476    8472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:14:16.572590    8472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:14:16.741649    8472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:16.741649    8472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:25.247209    8472 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247209    8472 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247636    8472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.247636    8472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.274937    8472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.274937    8472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.809600    8472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.809600    8472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.810164    8472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:25.810216    8472 kubeadm.go:322] [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:26.326643    8472 kubeadm.go:322] [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.327542    8472 out.go:204]   - Configuring RBAC rules ...
	I1212 23:14:26.326643    8472 command_runner.go:130] > [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.328018    8472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.328018    8472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.341522    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.341728    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.354025    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.354025    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.359843    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.359843    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.364553    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.364553    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.369249    8472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.369249    8472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.393459    8472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.393481    8472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.711238    8472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.711357    8472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.750599    8472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.750686    8472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.751909    8472 kubeadm.go:322] 
	I1212 23:14:26.752244    8472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752244    8472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752424    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.753252    8472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753252    8472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753309    8472 kubeadm.go:322] 
	I1212 23:14:26.753415    8472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322] 
	I1212 23:14:26.753445    8472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.753445    8472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.754014    8472 kubeadm.go:322] 
	I1212 23:14:26.754183    8472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754220    8472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754289    8472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 kubeadm.go:322] 
	I1212 23:14:26.754289    8472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754289    8472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754820    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754820    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754878    8472 kubeadm.go:322] 	--control-plane 
	I1212 23:14:26.754917    8472 command_runner.go:130] > 	--control-plane 
	I1212 23:14:26.754917    8472 kubeadm.go:322] 
	I1212 23:14:26.754995    8472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] 
	I1212 23:14:26.755165    8472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755165    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755707    8472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
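The join commands above pin the cluster CA with --discovery-token-ca-cert-hash: a SHA-256 over the CA certificate's Subject Public Key Info, which a joining node recomputes from the CA it is served in order to detect a man-in-the-middle. A sketch that derives the same "sha256:..." string from ca.crt:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    // Prints the kubeadm-style discovery hash for a CA certificate PEM:
    // sha256 of the cert's RawSubjectPublicKeyInfo.
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }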
	I1212 23:14:26.755762    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:26.755762    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:26.756717    8472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:14:26.771363    8472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 23:14:26.781345    8472 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: 2023-12-12 23:12:39.138849800 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Change: 2023-12-12 23:12:30.064000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] >  Birth: -
	I1212 23:14:26.781345    8472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:14:26.781345    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:14:26.831214    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:14:28.360489    8472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5292685s)
	I1212 23:14:28.360489    8472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:14:28.377434    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.378438    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-392000 minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.385676    8472 command_runner.go:130] > -16
	I1212 23:14:28.385745    8472 ops.go:34] apiserver oom_adj: -16
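
The "-16" read back here is the kube-apiserver's oom_adj value: kubeadm lowers it so the kernel's OOM killer strongly prefers other processes over the API server. A sketch of the same check in Go, assuming a single kube-apiserver process as on this one-node control plane:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiServerOOMAdj reproduces the check from the log: find the
// kube-apiserver PID with pgrep, then read /proc/<pid>/oom_adj
// (-16 in this run).
func apiServerOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		return "", fmt.Errorf("kube-apiserver not running")
	}
	data, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	v, err := apiServerOOMAdj()
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", v)
}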
	I1212 23:14:28.554211    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:14:28.554334    8472 command_runner.go:130] > node/multinode-392000 labeled
	I1212 23:14:28.574988    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.698031    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:28.717179    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.830537    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.348608    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.461037    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.849506    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.957356    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.362625    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.472272    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.848396    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.953849    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.353576    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.462341    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.853090    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.967586    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.355892    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.469924    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.859728    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.962773    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.364239    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.470177    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.864784    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.968916    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.351439    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.459257    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.855142    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.992369    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.364118    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.480745    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.848471    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.981045    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.353504    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:36.474547    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.857811    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.009603    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.360939    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.541831    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.855360    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.978223    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.358089    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:38.550481    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.868761    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.022604    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:39.352440    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.596621    8472 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:14:39.596712    8472 command_runner.go:130] > default   0         0s
	I1212 23:14:39.596736    8472 kubeadm.go:1088] duration metric: took 11.2361966s to wait for elevateKubeSystemPrivileges.
	I1212 23:14:39.596811    8472 kubeadm.go:406] StartCluster complete in 27.450269s
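
The burst of 'serviceaccounts "default" not found' errors above is a plain poll-until-ready loop: the controller manager creates the "default" ServiceAccount asynchronously after kubeadm finishes, so the check is retried roughly twice a second until it succeeds (11.2s in this run). A minimal sketch of such a loop, with the interval and timeout chosen here as assumptions rather than taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` on the node until the
// controller manager has created the ServiceAccount, as the log does
// between 23:14:28 and 23:14:39.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // NotFound stops being returned once the SA exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}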
	I1212 23:14:39.596862    8472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.597021    8472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.598694    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.600390    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:14:39.600697    8472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:14:39.600890    8472 addons.go:69] Setting storage-provisioner=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:69] Setting default-storageclass=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:231] Setting addon storage-provisioner=true in "multinode-392000"
	I1212 23:14:39.601014    8472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-392000"
	I1212 23:14:39.601153    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:39.601286    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:39.602024    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.602448    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.615520    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.616537    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.618133    8472 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:14:39.618679    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.618746    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.618746    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.618746    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.632969    8472 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1212 23:14:39.632969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Audit-Id: 48d468c3-d2b5-4ebf-8a31-5cfcaaf2e038
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.633400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.633475    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.633615    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634237    8472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634414    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.634442    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:39.634488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.647166    8472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:14:39.647166    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Audit-Id: 1d18df1e-467b-45b4-8fd3-f1be9c0eb077
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.647166    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.647166    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.647166    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.647166    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.650190    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:39.650593    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.650593    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Audit-Id: 257b2ee0-65f9-4fbe-a3e6-2b26b38e4e97
	I1212 23:14:39.650746    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.650746    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.650879    8472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-392000" context rescaled to 1 replicas
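
The GET/PUT pair above edits the Deployment's Scale subresource directly, rewriting spec.replicas from 2 to 1 because a single-node cluster only needs one CoreDNS pod. The same rescale expressed through kubectl, as a stand-in for the raw REST calls in the log:

package main

import (
	"fmt"
	"os/exec"
)

// rescaleCoreDNS achieves what the GET/PUT on the Scale subresource does,
// using kubectl instead of hand-built REST requests: drop the coredns
// Deployment in kube-system from 2 replicas to 1.
func rescaleCoreDNS() error {
	out, err := exec.Command("kubectl", "scale",
		"deployment", "coredns",
		"-n", "kube-system",
		"--replicas=1").CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	if err := rescaleCoreDNS(); err != nil {
		fmt.Println("scale failed:", err)
	}
}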
	I1212 23:14:39.650983    8472 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:14:39.652101    8472 out.go:177] * Verifying Kubernetes components...
	I1212 23:14:39.667782    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:39.958848    8472 command_runner.go:130] > apiVersion: v1
	I1212 23:14:39.958848    8472 command_runner.go:130] > data:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   Corefile: |
	I1212 23:14:39.958848    8472 command_runner.go:130] >     .:53 {
	I1212 23:14:39.958848    8472 command_runner.go:130] >         errors
	I1212 23:14:39.958848    8472 command_runner.go:130] >         health {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            lameduck 5s
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         ready
	I1212 23:14:39.958848    8472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            pods insecure
	I1212 23:14:39.958848    8472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:14:39.958848    8472 command_runner.go:130] >            ttl 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         prometheus :9153
	I1212 23:14:39.958848    8472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            max_concurrent 1000
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         cache 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loop
	I1212 23:14:39.958848    8472 command_runner.go:130] >         reload
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loadbalance
	I1212 23:14:39.958848    8472 command_runner.go:130] >     }
	I1212 23:14:39.958848    8472 command_runner.go:130] > kind: ConfigMap
	I1212 23:14:39.958848    8472 command_runner.go:130] > metadata:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:14:26Z"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   name: coredns
	I1212 23:14:39.958848    8472 command_runner.go:130] >   namespace: kube-system
	I1212 23:14:39.958848    8472 command_runner.go:130] >   resourceVersion: "257"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   uid: 7f397c04-a5c3-4364-9f10-d28458f5d6c8
	I1212 23:14:39.959540    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
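
The bash pipeline above rewrites the coredns ConfigMap in flight: sed inserts a hosts block resolving host.minikube.internal to the host gateway (172.30.48.1) ahead of the forward plugin, adds a log directive ahead of errors, and kubectl replace pushes the result back. A sketch of the hosts-block insertion done as string surgery in Go rather than sed (the demo Corefile below is an abbreviated stand-in, not the full dump above):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord mirrors the sed step in the log: insert a hosts block
// that resolves host.minikube.internal immediately before the forward
// plugin, the same position sed's `/forward .../i` address targets.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	return strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n    }\n"
	fmt.Println(injectHostRecord(corefile, "172.30.48.1"))
}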
	I1212 23:14:39.961001    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.962156    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.963642    8472 node_ready.go:35] waiting up to 6m0s for node "multinode-392000" to be "Ready" ...
	I1212 23:14:39.963798    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.963914    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.963987    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.963987    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.969659    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:39.969659    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Audit-Id: ed4f4991-8208-4d64-8919-42fbdb031b1b
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.970862    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:39.972406    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.972406    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.972643    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.972643    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.974394    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:39.975312    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Audit-Id: 8a9ed035-646e-4f38-b110-fe61c0dc496f
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.975401    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.975946    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.488957    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.488957    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.488957    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.488957    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.492969    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:40.492969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Audit-Id: d903c580-8adc-4d96-8f5f-d51f731bc93c
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.492969    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.668167    8472 command_runner.go:130] > configmap/coredns replaced
	I1212 23:14:40.669157    8472 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
	I1212 23:14:40.981876    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.981950    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.982011    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.982011    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.991394    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:40.991394    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Audit-Id: ab5b6285-e3ff-4e6f-b61b-a20df0759ba6
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.991394    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.489914    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.490030    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.490030    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.490030    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.494868    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:41.495917    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Audit-Id: 1e563910-36f9-4968-810e-a0bd4b1bd52f
	I1212 23:14:41.496167    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.496302    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.496696    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.903563    8472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:41.903563    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:41.904285    8472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:41.904285    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
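
"scp memory" here means the manifest is streamed from an in-memory buffer straight to the remote path, with no temporary file on either side. A rough equivalent by piping into sudo tee over ssh; the docker@ user, node IP, and sample manifest below are placeholders and simplifications (no key or port handling), not details taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// scpMemory streams an in-memory byte slice to a path on the VM by
// piping it into `sudo tee` over ssh, approximating ssh_runner's
// "scp memory" transfer.
func scpMemory(host, dst string, data []byte) error {
	cmd := exec.Command("ssh", host,
		fmt.Sprintf("sudo tee %s > /dev/null", dst))
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: storage-provisioner\n  namespace: kube-system\n")
	if err := scpMemory("docker@172.30.51.245", "/etc/kubernetes/addons/storage-provisioner.yaml", manifest); err != nil {
		fmt.Println("copy failed:", err)
	}
}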
	I1212 23:14:41.904285    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.905110    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:41.906532    8472 addons.go:231] Setting addon default-storageclass=true in "multinode-392000"
	I1212 23:14:41.906532    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:41.907304    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.980106    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.980486    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.980486    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.980486    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.985786    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:41.985786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.985786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Audit-Id: 08bb64de-dde1-4fa6-8913-0f6b5de0cf24
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.986033    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.986033    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.986463    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.987219    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
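
node_ready.go polls the node object roughly every 500ms, reading the Ready condition out of each response above, until it flips to "True" or the 6m budget is spent. A sketch of the same wait expressed with kubectl's jsonpath output instead of the REST client; the interval is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition until it reports True,
// the check the node_ready loop in the log performs over REST.
func waitNodeReady(node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", node, timeout)
}

func main() {
	if err := waitNodeReady("multinode-392000", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}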
	I1212 23:14:42.486548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.486653    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.486653    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.486653    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.496333    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:42.496447    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.496447    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.496524    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.496524    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Audit-Id: 4ab1601a-d766-4e5d-a976-df70bc7f3fc6
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.496654    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.497705    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:42.979753    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.979865    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.979865    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.979865    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.984301    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:42.984301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Audit-Id: d84e4388-d133-418c-ad44-eb666ea80368
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.984627    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.984771    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.985134    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.487286    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.487436    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.487436    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.487436    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.493059    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.493240    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Audit-Id: ff7197c8-30b8-4b58-8cc1-df9d319b0dbf
	I1212 23:14:43.493700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.979059    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.979132    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.979132    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.979132    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.984231    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.984231    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Audit-Id: a3b2e6ef-d4d8-4f3e-b9c5-6d5c3c21bbd3
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.984345    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.984345    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.984416    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.984416    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.984602    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.095027    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.095183    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.095249    8472 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:44.095249    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:14:44.095249    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.120131    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
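
Every Hyper-V query in this log is a PowerShell round trip; the line above pulls the first IP address of the VM's first network adapter. The same expression invoked from Go (Windows-only, and assumes the Hyper-V PowerShell module is available, as in this test environment):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hypervVMIP runs the PowerShell expression shown in the log to read the
// first IP address of a Hyper-V VM's first network adapter.
func hypervVMIP(vm string) (string, error) {
	expr := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", expr).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := hypervVMIP("multinode-392000")
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Println("VM IP:", ip)
}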
	I1212 23:14:44.483249    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.483332    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.483332    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.483332    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.487173    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.488191    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.488191    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.488335    8472 round_trippers.go:580]     Audit-Id: 266b4ffc-e86f-4f1b-b463-36bca9136481
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.488839    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.489392    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:44.989331    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.989428    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.989428    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.989428    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.992917    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.993400    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Audit-Id: d75583c4-9a74-49b4-bbf3-b56138886974
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.993757    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.481494    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.481494    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.481494    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.481778    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.487002    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:45.487002    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Audit-Id: 34cccb14-bef0-4d33-bac4-e822ad4bf7d0
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.487387    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.990444    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.990444    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.990444    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.990444    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.994459    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:45.995453    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Audit-Id: 75a4ef11-ddaa-4f93-8672-e7309c071368
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.995553    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.995597    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.996008    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.478860    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.478860    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.478860    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.478860    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.482906    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:46.482906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.482906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.484021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Audit-Id: f2e453d5-50bc-4639-bda1-a5a03905d0ad
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:46.484906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.485283    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stderr =====>] : 
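
The libmachine entries above show how the Hyper-V driver resolves the VM's address: it shells out to PowerShell and evaluates (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]. Below is a minimal Go sketch of that lookup, grounded in the exact command the log records; the helper name vmIPAddress is hypothetical, and this is illustrative rather than minikube's actual driver code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmIPAddress runs the same PowerShell expression the log shows libmachine
// executing and returns the trimmed stdout (the VM's first IP address).
// Hypothetical helper for illustration only.
func vmIPAddress(vmName string) (string, error) {
	expr := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := vmIPAddress("multinode-392000")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("VM IP:", ip) // the run above returned 172.30.51.245
}

On this run the lookup returned 172.30.51.245, which the SSH client on the next line then dials.
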
	I1212 23:14:46.902984    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:14:46.980436    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.980521    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.980521    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.980521    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.984189    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:46.984189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Audit-Id: 7c159fbf-c0d0-41ed-a33b-761beff59770
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.984333    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.984744    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.985579    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
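
The repeated GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000 entries are a readiness poll: roughly every 500 ms the client re-reads the Node object and checks its Ready condition, logging has status "Ready":"False" until it flips. A minimal client-go sketch of such a loop follows; it is illustrative, not minikube's node_ready.go, and the kubeconfig path and 6-minute timeout are assumptions taken from elsewhere in this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, matching the cadence visible in the timestamps above.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-392000", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("wait result:", err) // nil once the node reports Ready
}

In the run above the condition flips at 23:14:55, when the response body's resourceVersion advances from 335 to 424 and node_ready.go logs "Ready":"True" after 16.0282441s.
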
	I1212 23:14:47.051355    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:47.484303    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.484303    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.484303    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.484303    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.488895    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:47.488895    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Audit-Id: 28e8c341-cf42-49da-a69a-ab79f001048f
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.489240    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:47.868848    8472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:14:47.868942    8472 command_runner.go:130] > pod/storage-provisioner created
	I1212 23:14:47.990911    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.991083    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.991083    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.991083    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.996324    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:47.996324    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Audit-Id: 898f23b9-63a4-46cb-8539-9e21fae3e688
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.997714    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.480781    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.480862    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.480862    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.480862    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.484374    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.485189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.485189    8472 round_trippers.go:580]     Audit-Id: 1a3b1ec7-5eb6-4bb8-b344-5426a5516c00
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.485621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.989623    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.989623    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.989623    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.989698    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.992877    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.993906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Audit-Id: 975a7df8-210f-4288-bec3-86537d1ea98a
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.993906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.993906    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:49.083047    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:49.083318    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:49.083618    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
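
The sshutil.go line records the parameters of the SSH client minikube opens to the node: IP 172.30.51.245, port 22, the per-machine id_rsa key, and user docker. A standalone sketch using golang.org/x/crypto/ssh is below; it is illustrative rather than minikube's sshutil implementation, and the InsecureIgnoreHostKey callback and the uname command are assumptions for the sketch.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address taken from the log line above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption for the sketch
	}
	client, err := ssh.Dial("tcp", "172.30.51.245:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("uname -a") // stand-in for the ssh_runner commands below
	fmt.Println(string(out), err)
}

The ssh_runner.go Run entries in this log (the kubectl apply commands for the addon manifests) are executed over exactly this kind of session.
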
	I1212 23:14:49.220179    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:49.478362    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.478404    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.478488    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.478488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.486550    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:49.486550    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Audit-Id: 886c4e27-fc97-4d2e-be30-23c8528e1331
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.487579    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:49.633908    8472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:14:49.634368    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:14:49.634438    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.634438    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.634438    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.638301    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.638301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Length: 1273
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Audit-Id: 478d6e3c-e333-45bd-ad37-ff39e2c109a4
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.638613    8472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:14:49.639458    8472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.639570    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:14:49.639570    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:49.639632    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.643499    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.643499    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.643499    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Length: 1220
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Audit-Id: a15a2fa8-ae37-4d33-8ee0-c9808f9a288d
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.644178    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.644178    8472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.682970    8472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:14:49.684353    8472 addons.go:502] enable addons completed in 10.0836106s: enabled=[storage-provisioner default-storageclass]
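
Enabling default-storageclass ends with the GET/PUT pair logged just above: the client reads back the standard StorageClass that kubectl apply created, then writes it to PUT /apis/storage.k8s.io/v1/storageclasses/standard with the storageclass.kubernetes.io/is-default-class annotation set. In client-go terms the PUT is an Update call; the sketch below is illustrative rather than minikube's exact code path, and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// GET the cluster-scoped StorageClass, as in the request logged above.
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

	// Update sends the full object back, which is the PUT the log records.
	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard StorageClass marked as default")
}
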
	I1212 23:14:49.980729    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.980729    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.980729    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.980729    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.984838    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:49.985229    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Audit-Id: ce24cfdd-3acb-4830-ac23-4db47133d6a3
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.985323    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.985624    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.483312    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.483375    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.483375    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.483375    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.488227    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:50.488227    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Audit-Id: 6991df1a-7c65-4f8c-aa6d-8a4b07664792
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.488335    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.488445    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.981018    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.981153    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.981153    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.981153    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.986420    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:50.987021    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Audit-Id: 05d03ac9-757b-47ae-892d-06c9975e0504
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.987288    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.481784    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.481935    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.481935    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.481935    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.487331    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:51.487741    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Audit-Id: ea8e810d-7571-41b8-a29c-f7b350aa7e48
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.488700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.489229    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:51.980060    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.980060    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.980060    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.980060    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.986763    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:51.987222    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Audit-Id: e66e1130-e80e-4e5c-a2df-c6f097d5374f
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.987303    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.487530    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.487615    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.487615    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.487615    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.491306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.491306    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Audit-Id: 6d39f79a-048a-4380-88c0-1538a97cf6cb
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.492158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.988203    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.988350    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.988350    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.988350    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.991874    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.991874    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Audit-Id: b82dc74d-b44e-41ac-8e64-37803addc6c1
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.991874    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.992376    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.992376    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.992866    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.487128    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.487128    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.487128    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.487128    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.490404    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.490404    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Audit-Id: fcdaf883-7338-4102-abda-846f7169bb26
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.491349    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.491797    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:53.988709    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.988958    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.988958    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.988958    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.992351    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.992351    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Audit-Id: c1836498-4d32-49e6-a01e-d2011a223374
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.992796    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.992872    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.992872    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.993179    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.484052    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.484152    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.484152    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.484152    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.487262    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:54.487786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Audit-Id: f53da0c3-a775-4443-aabf-f7c4222d5d96
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.488171    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.984021    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.984123    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.984123    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.984123    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.989880    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:54.989880    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Audit-Id: c5095c7c-a76c-429e-af60-764abe494287
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.991622    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.485045    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.485181    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.485181    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.485181    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.489762    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:55.489762    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Audit-Id: 4f7c8477-81de-4b39-8164-bf264c826669
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.489762    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.490338    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.490338    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.490621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.987165    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.987255    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.987255    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.987255    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.990960    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:55.991209    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Audit-Id: 730af8dd-1c79-432a-ac28-d735f45d211a
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.991209    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:55.991993    8472 node_ready.go:49] node "multinode-392000" has status "Ready":"True"
	I1212 23:14:55.991993    8472 node_ready.go:38] duration metric: took 16.0282441s waiting for node "multinode-392000" to be "Ready" ...
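[Editor's note] The 16-second wait above is a readiness poll against the node's Ready condition. As a minimal sketch (not minikube's actual node_ready.go, whose details differ), the same check with client-go looks roughly like this; the ~500ms poll interval matches the spacing between the GETs in this log:

    // Minimal sketch, NOT minikube's implementation: poll the node's
    // Ready condition with client-go until it reports True or a timeout hits.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: the default kubeconfig points at this cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, matching the cadence of the GETs in this log.
        err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute,
            func() (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(context.TODO(),
                    "multinode-392000", metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient errors, keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node ready:", err == nil)
    }
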
	I1212 23:14:55.991993    8472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:14:55.992424    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:55.992451    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.992451    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.992451    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.997828    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:55.997828    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Audit-Id: 52d7810c-f76c-4c45-9178-39943c5e611e
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:56.000563    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
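[Editor's note] The PodList fetch above backs the label-based wait that follows: each selector in the logged list (k8s-app=kube-dns, component=etcd, ...) is resolved to concrete kube-system pods before the per-pod polling starts. A hedged fragment of that lookup; listSystemPods is a hypothetical helper (not a minikube function), with imports and the clientset as in the sketch above:

    // Hypothetical helper: list kube-system pods for one label selector.
    func listSystemPods(cs kubernetes.Interface, selector string) ([]corev1.Pod, error) {
        list, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return nil, err
        }
        return list.Items, nil
    }

    // Each selector from the log line would be checked in turn, e.g.:
    //   listSystemPods(cs, "k8s-app=kube-dns")
    //   listSystemPods(cs, "component=etcd")
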
	I1212 23:14:56.005713    8472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:56.005713    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.005713    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.005713    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.005713    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.009293    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:56.009293    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.009293    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Audit-Id: 349c895b-3263-4592-bf5f-cc4fce22f4db
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.009961    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.010548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.010601    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.010601    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.010670    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.013302    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.013302    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Audit-Id: 14638822-3485-4ab6-af72-f2d254050772
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.013994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.014102    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.014102    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.014313    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.014948    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.014948    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.014948    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.014948    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.017876    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.017876    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Audit-Id: e61611d3-94ea-464c-acce-2a665e01fb85
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.018073    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.018159    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.018325    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.018970    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.019023    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.019023    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.019078    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.020855    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:56.020855    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Audit-Id: d723e84b-6004-4853-8f4c-e9de464efdde
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.021772    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.021800    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.021800    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.021800    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.536622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.536622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.536622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.536622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.540896    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:56.540896    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.541442    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Audit-Id: ea416197-cb64-40af-bf73-38fd2e37a823
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.541534    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.541534    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.541670    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.542439    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.542559    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.542559    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.542559    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.544902    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.544902    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Audit-Id: 82379cb0-03c3-4187-8a08-c95f8c2d434e
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.546107    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.027636    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.027717    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.027791    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.027791    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.030425    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.030425    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Audit-Id: 856b15b9-b6fa-489d-9a24-eaaf1afc5bd5
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.031434    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.032501    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.032606    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.032658    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.032658    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.035158    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.035158    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Audit-Id: 2f81449f-83b9-4c66-bc2e-17ac17b48322
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.035158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.534454    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.534587    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.534587    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.534587    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.541021    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:57.541365    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Audit-Id: bb822741-a39c-491c-8b27-f5dc32b9ac7d
	I1212 23:14:57.541943    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.542190    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.542190    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.542190    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.542190    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.545257    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:57.545257    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.545896    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Audit-Id: 27629acd-42f2-4083-aba9-c01ef165283c
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.546084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.546084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.546180    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.546712    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:58.023516    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:58.023822    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.023880    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.023880    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.027764    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.028057    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.028057    8472 round_trippers.go:580]     Audit-Id: 1522c4b2-abdb-44ed-9ac8-0a151cbe371e
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.028173    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.028494    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1212 23:14:58.029540    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.029617    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.029617    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.029617    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.032006    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.033008    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Audit-Id: 5f970653-a2f7-4b0e-ab8b-5146ee17b7e9
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.033046    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.033115    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.033423    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.034124    8472 pod_ready.go:92] pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.034124    8472 pod_ready.go:81] duration metric: took 2.0284013s waiting for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
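[Editor's note] A pod counts as "Ready" in these waits when its PodReady condition reports True, which is why each poll above re-fetches both the pod and its node. A minimal sketch of that condition check (isPodReady is a hypothetical name; minikube's pod_ready.go wraps similar logic):

    // Hypothetical helper mirroring the check behind pod_ready.go: a pod is
    // treated as "Ready" when its PodReady condition has status True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
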
	I1212 23:14:58.034124    8472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034268    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-392000
	I1212 23:14:58.034268    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.034268    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.034268    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.040664    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:58.040664    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Audit-Id: 8ec23e55-3f6f-45bb-9dd5-58fa0a89221a
	I1212 23:14:58.041172    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-392000","namespace":"kube-system","uid":"9ba15872-d011-4389-bbbd-cda3bb377f30","resourceVersion":"299","creationTimestamp":"2023-12-12T23:14:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.51.245:2379","kubernetes.io/config.hash":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.mirror":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.seen":"2023-12-12T23:14:17.439033677Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1212 23:14:58.041719    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.041719    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.041719    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.041719    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.045328    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.045328    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Audit-Id: 9c560ca1-5f98-49b8-ae36-71e9aa076f5e
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.045328    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.045328    8472 pod_ready.go:92] pod "etcd-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.045328    8472 pod_ready.go:81] duration metric: took 11.2037ms waiting for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-392000
	I1212 23:14:58.046330    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.046330    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.046330    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.048649    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.048649    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Audit-Id: ebed4532-17cb-49da-a702-3de6ff899b2d
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.048649    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-392000","namespace":"kube-system","uid":"4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75","resourceVersion":"330","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.51.245:8443","kubernetes.io/config.hash":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.mirror":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.seen":"2023-12-12T23:14:26.871565960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1212 23:14:58.048649    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.048649    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.048649    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.052979    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.052979    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.052979    8472 round_trippers.go:580]     Audit-Id: ba4e3ef6-8436-406b-be77-63a9e785adac
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.053729    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.053941    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.054233    8472 pod_ready.go:92] pod "kube-apiserver-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.054233    8472 pod_ready.go:81] duration metric: took 8.9055ms waiting for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-392000
	I1212 23:14:58.054233    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.054233    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.054233    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.057795    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.057795    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.057795    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.057795    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Audit-Id: 23c9283e-f0e0-44ab-b1c7-820bcafbc897
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.058055    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.058481    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-392000","namespace":"kube-system","uid":"60a15f93-6e63-4c2e-a54e-7e6a2275127c","resourceVersion":"296","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.mirror":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.seen":"2023-12-12T23:14:26.871570660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1212 23:14:58.059284    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.059347    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.059347    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.059347    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.067599    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:58.067599    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Audit-Id: cd4581bf-1000-4906-812b-59a573920066
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.068544    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.068544    8472 pod_ready.go:92] pod "kube-controller-manager-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.068544    8472 pod_ready.go:81] duration metric: took 14.3106ms waiting for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.068544    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.194675    8472 request.go:629] Waited for 125.8741ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.194825    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.194825    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.198109    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.198109    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Audit-Id: 5a8d39b0-49cf-41c3-9e07-80cfc7e1b033
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.198994    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.199312    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55nr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"76f72515-2132-4473-883e-2846ebaca62e","resourceVersion":"403","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"932f2a4e-5c28-4c7c-8885-1298fbe1d167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932f2a4e-5c28-4c7c-8885-1298fbe1d167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
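[Editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines just above and below come from client-go's default client-side rate limiter (QPS 5, burst 10), not from API Priority and Fairness on the server. A sketch of how a client could relax that limit; newFastClient is a hypothetical helper and needs k8s.io/client-go/rest:

    // Hypothetical helper: raise client-go's client-side rate limit above the
    // defaults (rest.DefaultQPS = 5, rest.DefaultBurst = 10) that produce the
    // "Waited for ..." throttling messages in this log.
    func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }
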
	I1212 23:14:58.398673    8472 request.go:629] Waited for 198.4474ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.398787    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.398966    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.401717    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.401717    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.401717    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Audit-Id: b728eb3e-d54c-43cb-90ce-e7b356f69ae4
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.402725    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.402828    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.403583    8472 pod_ready.go:92] pod "kube-proxy-55nr8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.403583    8472 pod_ready.go:81] duration metric: took 335.0375ms waiting for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.403583    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.601380    8472 request.go:629] Waited for 197.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.601681    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.601681    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.605957    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.606145    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Audit-Id: 02f9b40f-c4e0-4c98-bcbc-9913ccb796e7
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.606409    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-392000","namespace":"kube-system","uid":"1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2","resourceVersion":"295","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.mirror":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.seen":"2023-12-12T23:14:26.871571660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1212 23:14:58.789396    8472 request.go:629] Waited for 182.2618ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789688    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789779    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.789779    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.789828    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.793340    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.794060    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Audit-Id: e123c53f-d439-4d57-931f-9f875d26f581
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.794126    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.795030    8472 pod_ready.go:92] pod "kube-scheduler-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.795030    8472 pod_ready.go:81] duration metric: took 391.4452ms waiting for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.795030    8472 pod_ready.go:38] duration metric: took 2.8027177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:14:58.795030    8472 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:14:58.810986    8472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:14:58.830637    8472 command_runner.go:130] > 2099
	I1212 23:14:58.830637    8472 api_server.go:72] duration metric: took 19.1794438s to wait for apiserver process to appear ...
	I1212 23:14:58.830637    8472 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:14:58.830637    8472 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:14:58.838776    8472 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
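
The healthz probe logged above is just an HTTP GET against the apiserver, treated as healthy once it returns 200 with body "ok". A minimal Go sketch of the same kind of probe — the URL, timeout, and TLS handling here are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the given /healthz URL until it returns 200 or the
    // timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver serves a self-signed cert in this setup, so the
            // sketch skips verification; a real client would pin the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: control plane is serving
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://172.30.51.245:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
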
	I1212 23:14:58.839718    8472 round_trippers.go:463] GET https://172.30.51.245:8443/version
	I1212 23:14:58.839718    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.839718    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.839718    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.841290    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:58.841290    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.841290    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Length: 264
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.841836    8472 round_trippers.go:580]     Audit-Id: 46b8d46d-380f-4f82-941f-34d5ff7fc981
	I1212 23:14:58.841875    8472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
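
The /version response is plain JSON, so extracting the control plane version is a one-struct decode. A sketch using only the fields visible in the body above (the struct name is ours, not minikube's):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo mirrors a subset of the /version payload shown in the log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.28.4
    }
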
	I1212 23:14:58.841973    8472 api_server.go:141] control plane version: v1.28.4
	I1212 23:14:58.842105    8472 api_server.go:131] duration metric: took 11.468ms to wait for apiserver health ...
	I1212 23:14:58.842105    8472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:14:58.990794    8472 request.go:629] Waited for 148.3275ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990949    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990993    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.990993    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.990993    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.996780    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:58.996780    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.997050    8472 round_trippers.go:580]     Audit-Id: ef9a1c82-2d0d-4fd5-aef9-3720896905c4
	I1212 23:14:58.998795    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.002276    8472 system_pods.go:59] 8 kube-system pods found
	I1212 23:14:59.002323    8472 system_pods.go:61] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.002414    8472 system_pods.go:74] duration metric: took 160.3082ms to wait for pod list to return data ...
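
The pod check above boils down to listing kube-system pods and confirming each reports phase Running. A hedged client-go sketch of an equivalent check (the kubeconfig path is taken from the environment shown earlier in this report; this is not minikube's own helper):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Compare each pod's phase against Running, as the waiter does.
            running := p.Status.Phase == corev1.PodRunning
            fmt.Printf("%q running=%v\n", p.Name, running)
        }
    }
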
	I1212 23:14:59.002414    8472 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:14:59.195077    8472 request.go:629] Waited for 192.5258ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.195622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.195622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.199306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.199787    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Audit-Id: d11e054d-44f1-4ba9-98c1-9a69160ffdff
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Length: 261
	I1212 23:14:59.199787    8472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7c305be4-9460-4ff1-a283-85a13dcb1cde","resourceVersion":"367","creationTimestamp":"2023-12-12T23:14:39Z"}}]}
	I1212 23:14:59.199787    8472 default_sa.go:45] found service account: "default"
	I1212 23:14:59.199787    8472 default_sa.go:55] duration metric: took 197.3719ms for default service account to be created ...
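
The repeated "Waited for ... due to client-side throttling" lines come from client-go's client-side token-bucket rate limiter, which delays requests once the burst budget is spent. A small sketch of that mechanism using the limiter client-go ships (the QPS and burst values are illustrative):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10) // 5 QPS, burst of 10
        for i := 0; i < 15; i++ {
            start := time.Now()
            limiter.Accept() // blocks when the bucket is empty, like the waits logged above
            if wait := time.Since(start); wait > time.Millisecond {
                fmt.Printf("request %d waited %v for a token\n", i, wait)
            }
        }
    }
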
	I1212 23:14:59.199787    8472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:14:59.396801    8472 request.go:629] Waited for 196.4246ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.397321    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.397321    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.400691    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.400691    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Audit-Id: 70f11694-1074-4f5f-b23d-4a24efbaa455
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.403399    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.408656    8472 system_pods.go:86] 8 kube-system pods found
	I1212 23:14:59.409213    8472 system_pods.go:89] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.409293    8472 system_pods.go:126] duration metric: took 209.505ms to wait for k8s-apps to be running ...
	I1212 23:14:59.409358    8472 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:14:59.423142    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:59.445203    8472 system_svc.go:56] duration metric: took 35.9106ms WaitForService to wait for kubelet.
	I1212 23:14:59.445871    8472 kubeadm.go:581] duration metric: took 19.7946755s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:14:59.445871    8472 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:14:59.598916    8472 request.go:629] Waited for 152.7318ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.599012    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.599012    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.605849    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:59.605849    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Audit-Id: 36bbb4b8-2cd2-4825-9a0a-f9d3f7de5388
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.605849    8472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1212 23:14:59.606649    8472 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:14:59.606649    8472 node_conditions.go:123] node cpu capacity is 2
	I1212 23:14:59.606649    8472 node_conditions.go:105] duration metric: took 160.7768ms to run NodePressure ...
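
The NodePressure verification reads the capacity figures straight off the NodeList response: ephemeral storage and CPU count. A sketch of pulling those values from a corev1.Node — the node literal below stands in for the object fetched above:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // A stand-in Node carrying the capacity values seen in the log; a real
        // checker would take the Node from the NodeList response.
        node := corev1.Node{
            Status: corev1.NodeStatus{
                Capacity: corev1.ResourceList{
                    corev1.ResourceCPU:              resource.MustParse("2"),
                    corev1.ResourceEphemeralStorage: resource.MustParse("17784752Ki"),
                },
            },
        }
        fmt.Printf("node cpu capacity is %s\n", node.Status.Capacity.Cpu())
        fmt.Printf("node storage ephemeral capacity is %s\n", node.Status.Capacity.StorageEphemeral())
    }
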
	I1212 23:14:59.606649    8472 start.go:228] waiting for startup goroutines ...
	I1212 23:14:59.606649    8472 start.go:233] waiting for cluster config update ...
	I1212 23:14:59.606649    8472 start.go:242] writing updated cluster config ...
	I1212 23:14:59.609246    8472 out.go:177] 
	I1212 23:14:59.621487    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:59.622710    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.625530    8472 out.go:177] * Starting worker node multinode-392000-m02 in cluster multinode-392000
	I1212 23:14:59.626570    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:14:59.626570    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:14:59.627622    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:14:59.627622    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:14:59.627622    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.635421    8472 start.go:365] acquiring machines lock for multinode-392000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:14:59.636404    8472 start.go:369] acquired machines lock for "multinode-392000-m02" in 983.5µs
	I1212 23:14:59.636641    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
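
The profile config serialized above round-trips through config.json. A sketch of decoding a trimmed, illustrative subset of those fields — the field names follow the dump, but this is not the full minikube type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // clusterConfig is a deliberately trimmed subset of the profile config.
    type clusterConfig struct {
        Name     string
        Memory   int
        CPUs     int
        DiskSize int
        Driver   string
        Nodes    []struct {
            Name         string
            IP           string
            Port         int
            ControlPlane bool
            Worker       bool
        }
    }

    func main() {
        raw, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json`)
        if err != nil {
            panic(err)
        }
        var cc clusterConfig
        if err := json.Unmarshal(raw, &cc); err != nil {
            panic(err)
        }
        fmt.Printf("%s: %d nodes, driver=%s\n", cc.Name, len(cc.Nodes), cc.Driver)
    }
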
	I1212 23:14:59.636641    8472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1212 23:14:59.637295    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:14:59.637925    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:14:59.637925    8472 client.go:168] LocalClient.Create starting
	I1212 23:14:59.637925    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:14:59.638507    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.638593    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.638845    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:14:59.639076    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.639124    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.639207    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:15:01.516858    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:04.771547    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:04.771630    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:04.771709    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:08.419999    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:08.420189    8472 main.go:141] libmachine: [stderr =====>] : 
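
The switch discovery step shells out to powershell.exe and decodes the ConvertTo-Json output. A sketch of the same pattern; note that in Hyper-V's enum SwitchType 1 is Internal, which is what the "Default Switch" reports above:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch mirrors the three properties selected in the logged command.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 0 = Private, 1 = Internal, 2 = External
    }

    func main() {
        script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("switch %q (type %d) id=%s\n", s.Name, s.SwitchType, s.Id)
        }
    }
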
	I1212 23:15:08.422680    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:15:08.872411    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: Creating VM...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:12.102765    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:12.102977    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:12.103063    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:15:12.103063    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:13.864474    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:13.864777    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:13.864985    8472 main.go:141] libmachine: Creating VHD
	I1212 23:15:13.864985    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C3CD4AE2-4C48-4AEE-B99B-DEEF0B4769F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:17.628988    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:15:17.629139    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:15:17.638018    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:20.769313    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -SizeBytes 20000MB
	I1212 23:15:23.326059    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:23.326281    8472 main.go:141] libmachine: [stderr =====>] : 
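
The fixed-then-dynamic VHD dance above exists so the driver can seed data onto the disk before first boot: it creates a tiny fixed VHD, writes a tar stream (the "magic tar header" and SSH key tar header noted above) at the start of the data area, then converts the image to a dynamic VHD and resizes it; the guest unpacks the tar on first boot. A sketch of the tar-writing step under those assumptions (file names are illustrative):

    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        key, err := os.ReadFile("id_rsa.pub")
        if err != nil {
            panic(err)
        }
        f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0o644)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        // A fixed VHD keeps its data first and a 512-byte footer at the end,
        // so writing from offset 0 lands the tar stream in the data area.
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            panic(err)
        }
        if _, err := tw.Write(key); err != nil {
            panic(err)
        }
        if err := tw.Close(); err != nil {
            panic(err)
        }
    }
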
	I1212 23:15:23.326443    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-392000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000-m02 -DynamicMemoryEnabled $false
	I1212 23:15:29.047581    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:29.047983    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:29.048174    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000-m02 -Count 2
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:31.217251    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\boot2docker.iso'
	I1212 23:15:33.748082    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd'
	I1212 23:15:36.359294    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: Starting VM...
	I1212 23:15:36.359738    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000-m02
	I1212 23:15:39.227776    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:15:39.228071    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:41.509631    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:44.031565    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:44.031787    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:45.038541    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:49.774015    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:49.774142    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:50.775721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:55.502870    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:55.503039    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:56.518873    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:58.738659    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:58.738736    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:58.738844    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:02.269014    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:04.506810    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:04.506866    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:04.506903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:07.087487    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:07.087855    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:07.088033    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stderr =====>] : 
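
"Waiting for host to start..." is implemented as the poll loop visible above: query the VM's state, then the first adapter's first IP address, and sleep between rounds until the address is non-empty. A sketch of that loop — the VM name, interval, and timeout are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForIP polls Hyper-V via PowerShell until the VM reports an address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            q := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
            out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", q).Output()
            if err != nil {
                return "", err
            }
            if ip := strings.TrimSpace(string(out)); ip != "" {
                return ip, nil // e.g. 172.30.56.38
            }
            time.Sleep(time.Second) // adapter has no lease yet; try again
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
        ip, err := waitForIP("multinode-392000-m02", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("host IP:", ip)
    }
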
	I1212 23:16:09.244063    8472 machine.go:88] provisioning docker machine ...
	I1212 23:16:09.244248    8472 buildroot.go:166] provisioning hostname "multinode-392000-m02"
	I1212 23:16:09.244333    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:11.421631    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:13.977447    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:13.977572    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:13.983166    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:13.992249    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:13.992249    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname
	I1212 23:16:14.163299    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000-m02
	
	I1212 23:16:14.163350    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:16.307595    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:18.839723    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:18.840482    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:18.840482    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:18.989326    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
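
Provisioning commands like the hostname and /etc/hosts edits above run over SSH with the machine's generated key. A sketch of executing one such command with golang.org/x/crypto/ssh; the host, user, and key path follow the log but stand in as assumptions here, and the host key check is skipped as it would be for a throwaway test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM
        }
        client, err := ssh.Dial("tcp", "172.30.56.38:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname`)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
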
	I1212 23:16:18.990311    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:16:18.990311    8472 buildroot.go:174] setting up certificates
	I1212 23:16:18.990311    8472 provision.go:83] configureAuth start
	I1212 23:16:18.990453    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:21.069665    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:23.556570    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:28.222549    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:28.222832    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:28.222832    8472 provision.go:138] copyHostCerts
	I1212 23:16:28.223026    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:16:28.223356    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:16:28.223356    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:16:28.223923    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:16:28.224665    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:16:28.225195    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:16:28.225367    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:16:28.225569    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:16:28.226891    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:16:28.227287    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:16:28.227287    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:16:28.227775    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:16:28.228810    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000-m02 san=[172.30.56.38 172.30.56.38 localhost 127.0.0.1 minikube multinode-392000-m02]
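
configureAuth mints a server certificate whose SANs cover the VM's IP, localhost, and the machine names listed above. A sketch of building such a cert with crypto/x509; for brevity this version is self-signed, whereas the real step signs with the ca.pem/ca-key.pem pair:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the san=[...] list in the log line above.
            DNSNames:    []string{"localhost", "minikube", "multinode-392000-m02"},
            IPAddresses: []net.IP{net.ParseIP("172.30.56.38"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }
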
	I1212 23:16:28.608171    8472 provision.go:172] copyRemoteCerts
	I1212 23:16:28.622324    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:28.622324    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:30.750561    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:33.272878    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:33.273157    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:33.273672    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:33.380622    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7582767s)
	I1212 23:16:33.380733    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:16:33.380808    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:16:33.420401    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:16:33.420965    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:16:33.458601    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:16:33.458774    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:16:33.496244    8472 provision.go:86] duration metric: configureAuth took 14.5058679s
	I1212 23:16:33.496324    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:33.496868    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:16:33.497008    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:38.152182    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:38.152702    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:38.152702    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:16:38.292294    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:16:38.292294    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:16:38.292555    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:16:38.292555    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:40.464946    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:43.007365    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:43.008294    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:43.008294    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.51.245"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:16:43.171083    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.51.245
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:16:43.171185    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:45.284624    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:47.800669    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:47.801716    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:47.801716    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:16:48.748338    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:16:48.748338    8472 machine.go:91] provisioned docker machine in 39.5040974s
	I1212 23:16:48.748338    8472 client.go:171] LocalClient.Create took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:300] post-start starting for "multinode-392000-m02" (driver="hyperv")
	I1212 23:16:48.748887    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:48.762204    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:48.762204    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:50.863756    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:53.416608    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:53.526358    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7640815s)
	I1212 23:16:53.541029    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:53.550919    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:16:53.550919    8472 command_runner.go:130] > ID=buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:16:53.550919    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:16:53.551099    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:16:53.552635    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:16:53.552635    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:16:53.567223    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:53.582208    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:16:53.623271    8472 start.go:303] post-start completed in 4.8749111s
	I1212 23:16:53.626212    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:55.698604    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:58.239486    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:16:58.242308    8472 start.go:128] duration metric: createHost completed in 1m58.6051335s
	I1212 23:16:58.242308    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:00.321547    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:02.864207    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:02.864907    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:02.864907    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:03.006436    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423023.005320607
	
	I1212 23:17:03.006436    8472 fix.go:206] guest clock: 1702423023.005320607
	I1212 23:17:03.006436    8472 fix.go:219] Guest: 2023-12-12 23:17:03.005320607 +0000 UTC Remote: 2023-12-12 23:16:58.2423084 +0000 UTC m=+328.348317501 (delta=4.763012207s)
	I1212 23:17:03.006606    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:05.102311    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:07.631708    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:07.632284    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:07.632480    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702423023
	I1212 23:17:07.785418    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:17:03 UTC 2023
	
	I1212 23:17:07.785481    8472 fix.go:226] clock set: Tue Dec 12 23:17:03 UTC 2023
	 (err=<nil>)
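
fix.go first samples the guest clock with "date +%s.%N", computes the drift against the host (4.76s here), then pins the guest to the host's Unix time with "sudo date -s @<seconds>". The same step as a sketch, again assuming the hypothetical runSSH helper:

    package provision

    import (
        "fmt"
        "time"
    )

    // syncGuestClock pushes the host's current Unix time to the guest, the
    // same command the log shows ("sudo date -s @1702423023").
    func syncGuestClock(runSSH func(string) (string, error)) error {
        now := time.Now().Unix()
        if out, err := runSSH(fmt.Sprintf("sudo date -s @%d", now)); err != nil {
            return fmt.Errorf("setting guest clock failed: %s: %w", out, err)
        }
        return nil
    }
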
	I1212 23:17:07.785481    8472 start.go:83] releasing machines lock for "multinode-392000-m02", held for 2m8.1482636s
	I1212 23:17:07.785678    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:09.909750    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:12.452194    8472 out.go:177] * Found network options:
	I1212 23:17:12.452963    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.453612    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.454421    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.455285    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:17:12.456641    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.458904    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:12.459089    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:12.471636    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:17:12.471636    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:14.665006    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.330171    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.349676    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.349791    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.350393    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.520588    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:17:17.520698    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0616953s)
	I1212 23:17:17.520789    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1212 23:17:17.520789    8472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0491302s)
	W1212 23:17:17.520789    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:17.540506    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:17.565496    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:17:17.565496    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:17.565629    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:17.565729    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:17.592642    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:17:17.606915    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:17:17.641476    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:17:17.660823    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:17:17.677875    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:17:17.711806    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.740097    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:17:17.771613    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.803488    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:17.833971    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:17:17.864431    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:17.880090    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:17:17.891942    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:17.921922    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:18.092747    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
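
The sed pipeline above reshapes /etc/containerd/config.toml before containerd is restarted: pin the pause image to registry.k8s.io/pause:3.9, disable restrict_oom_score_adj, force the cgroupfs driver via SystemdCgroup = false, migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The central substitution, expressed in Go with a multiline regexp instead of sed (a sketch, assuming direct file access):

    package provision

    import (
        "os"
        "regexp"
    )

    // setCgroupfs mirrors the sed edit above: every "SystemdCgroup = ..."
    // line becomes "SystemdCgroup = false", with indentation preserved via
    // the captured group.
    func setCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return os.WriteFile(path, re.ReplaceAll(data, []byte("${1}SystemdCgroup = false")), 0o644)
    }
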
	I1212 23:17:18.119496    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:18.134351    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Unit]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:17:18.152056    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:17:18.152056    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:17:18.152056    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Service]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Type=notify
	I1212 23:17:18.152056    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:17:18.152056    8472 command_runner.go:130] > Environment=NO_PROXY=172.30.51.245
	I1212 23:17:18.152056    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:17:18.152056    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:17:18.152056    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:17:18.152056    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:17:18.152056    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:17:18.152056    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:17:18.152056    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:17:18.152056    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:17:18.153073    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:17:18.153073    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:17:18.153073    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:17:18.153073    8472 command_runner.go:130] > Delegate=yes
	I1212 23:17:18.153073    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:17:18.153073    8472 command_runner.go:130] > KillMode=process
	I1212 23:17:18.153073    8472 command_runner.go:130] > [Install]
	I1212 23:17:18.153073    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:17:18.165057    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.196057    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:18.246410    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.280066    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.313237    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:17:18.368580    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.388251    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:18.419806    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:17:18.434054    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:17:18.440054    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:17:18.453333    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:17:18.468540    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:17:18.509927    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:17:18.683814    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:17:18.837593    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:17:18.838769    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
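
The 130-byte /etc/docker/daemon.json copied here is not printed, so its exact contents are unknown from this log; the piece implied by the "configuring docker to use cgroupfs" line is the cgroup-driver exec option. A plausible minimal payload, offered as an assumption rather than the verbatim file:

    package provision

    import "encoding/json"

    // dockerDaemonJSON builds a minimal daemon.json pinning the cgroup
    // driver. The real file may carry more keys; only exec-opts is implied
    // by the log line above.
    func dockerDaemonJSON() ([]byte, error) {
        return json.Marshal(map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        })
    }
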
	I1212 23:17:18.883547    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:19.063745    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:18:20.172717    8472 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1212 23:18:20.172717    8472 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1212 23:18:20.172717    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1086969s)
	I1212 23:18:20.190447    8472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 23:18:20.208531    8472 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	I1212 23:18:20.208875    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	I1212 23:18:20.208924    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	I1212 23:18:20.208955    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209960    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210035    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1212 23:18:20.210087    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210221    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210255    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210295    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210368    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I1212 23:18:20.218077    8472 out.go:177] 
	W1212 23:18:20.218999    8472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1212 23:18:20.219707    8472 out.go:239] * 
	W1212 23:18:20.220544    8472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:18:20.221540    8472 out.go:177] 
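
The second node's failure above is the pivotal event in this run: after a restart, dockerd on multinode-392000-m02 timed out dialing /run/containerd/containerd.sock and docker.service never came back, so the node could not join the cluster. A minimal first-pass check, assuming the VM were still reachable over SSH (the -n flag selects the secondary node; these commands are a hypothetical follow-up, not part of the test run), might look like:

	# Is containerd itself up on the failing node?
	minikube ssh -p multinode-392000 -n m02 "sudo systemctl status containerd --no-pager"
	# Inspect containerd's journal for the window in which dockerd timed out
	minikube ssh -p multinode-392000 -n m02 "sudo journalctl -u containerd --no-pager -n 50"
	# Verify the socket dockerd tried to dial actually exists
	minikube ssh -p multinode-392000 -n m02 "ls -l /run/containerd/containerd.sock"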
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:18:41 UTC. --
	Dec 12 23:14:40 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:40.283223085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:44 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/13c6e0fbb4c87c25665429c729fa4fb18695bac595a0626dd72b4e4603498987/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:49 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:49Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230809-80a64d96: Status: Downloaded newer image for kindest/kindnetd:v20230809-80a64d96"
	Dec 12 23:14:50 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:50.052223934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:50 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:50.052396733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:50 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:50.052446732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:50 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:50.053325225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282283321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282391320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282424620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282437620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.284918206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.285109705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286113599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286332798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7694fc2e072409c82e9a89c81cdb1dbf3955a826194d4c6ce69896a818ffd8c/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eec0e2bb8f7fb3f97224e573a86f1d0c8af411baddfa1adaa20402928c80977d/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.073894364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074049263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074069063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074078763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132115055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132325154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132351354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132362153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d33bb583a4c67       ead0a4a53df89                                                                              3 minutes ago       Running             coredns                   0                   eec0e2bb8f7fb       coredns-5dd5756b68-4xn8h
	f6b34e581fc6d       6e38f40d628db                                                                              3 minutes ago       Running             storage-provisioner       0                   d7694fc2e0724       storage-provisioner
	58046948f7a39       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   3 minutes ago       Running             kindnet-cni               0                   13c6e0fbb4c87       kindnet-bpcxd
	a260d7090f938       83f6cc407eed8                                                                              4 minutes ago       Running             kube-proxy                0                   60c6b551ada48       kube-proxy-55nr8
	2313251d444bd       e3db313c6dbc0                                                                              4 minutes ago       Running             kube-scheduler            0                   2f8be6d8ad0b8       kube-scheduler-multinode-392000
	22eab41fa9507       73deb9a3f7025                                                                              4 minutes ago       Running             etcd                      0                   bb073669c83d7       etcd-multinode-392000
	235957741d342       d058aa5ab969c                                                                              4 minutes ago       Running             kube-controller-manager   0                   0a157140134cc       kube-controller-manager-multinode-392000
	6c354edfe4229       7fe0e6f37db33                                                                              4 minutes ago       Running             kube-apiserver            0                   74927bb72940a       kube-apiserver-multinode-392000
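
All control-plane containers on the primary node are Running with attempt 0, so the breakage is isolated to the second node. To cross-check this table against the runtime directly, one plausible spot check from the host (hypothetical; crictl ships in the minikube guest image) would be:

	# List all containers as the CRI sees them on the primary node
	minikube ssh -p multinode-392000 "sudo crictl ps -a"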
	
	* 
	* ==> coredns [d33bb583a4c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52436 - 14801 "HINFO IN 6583598644721938310.5334892932610769491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082658561s
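
The HINFO query answered NXDOMAIN above is CoreDNS's routine startup self-probe, not an error. A quick way to confirm in-cluster DNS is actually serving, sketched as a one-off probe (busybox:1.28 is chosen because its nslookup is known to behave; this check is not part of the test run), would be:

	# Run a throwaway pod and resolve the kubernetes service
	kubectl --context multinode-392000 run dns-probe --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default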
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-392000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:18:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:14:58 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:14:58 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:14:58 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:14:58 +0000   Tue, 12 Dec 2023 23:14:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.51.245
	  Hostname:    multinode-392000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 430cf12d1f18486bbb2dad5ba35f34f7
	  System UUID:                7ad4f3ea-4ba4-0c41-b258-b71782793bdf
	  Boot ID:                    de054c31-4928-4877-9a0d-94e8f25eb559
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-4xn8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 etcd-multinode-392000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m15s
	  kube-system                 kindnet-bpcxd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-apiserver-multinode-392000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-multinode-392000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-55nr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-multinode-392000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  Starting                 4m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m24s (x8 over 4m24s)  kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x8 over 4m24s)  kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x7 over 4m24s)  kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node multinode-392000 event: Registered Node multinode-392000 in Controller
	  Normal  NodeReady                3m46s                  kubelet          Node multinode-392000 status is now: NodeReady
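
Note that only multinode-392000 appears under "describe nodes": m02 never registered because its Docker engine failed to start, which is exactly the condition FreshStart2Nodes trips over. A minimal confirmation, assuming the cluster were still up, could be:

	# Expect a single Ready control-plane node and no m02 entry
	kubectl --context multinode-392000 get nodes -o wide
	out/minikube-windows-amd64.exe status -p multinode-392000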
	
	* 
	* ==> dmesg <==
	* [  +1.254662] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.084744] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.170112] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.825297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:13] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136611] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[ +29.496244] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.608816] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.164324] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.190534] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +1.324953] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.324912] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	[  +0.169479] systemd-fstab-generator[1166]: Ignoring "noauto" for root device
	[  +0.169520] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.165018] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.210508] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1309]: Ignoring "noauto" for root device
	[  +2.134792] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.270408] systemd-fstab-generator[1690]: Ignoring "noauto" for root device
	[  +0.838733] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.996306] systemd-fstab-generator[2661]: Ignoring "noauto" for root device
	[ +24.543609] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [22eab41fa950] <==
	* {"level":"info","ts":"2023-12-12T23:14:20.234779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 switched to configuration voters=(10664302421299840929)"}
	{"level":"info","ts":"2023-12-12T23:14:20.239698Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"577d8ccb6648d9a8","local-member-id":"93ff368cdeea47a1","added-peer-id":"93ff368cdeea47a1","added-peer-peer-urls":["https://172.30.51.245:2380"]}
	{"level":"info","ts":"2023-12-12T23:14:20.240085Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T23:14:20.240296Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.51.245:2380"}
	{"level":"info","ts":"2023-12-12T23:14:20.240318Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.51.245:2380"}
	{"level":"info","ts":"2023-12-12T23:14:20.245846Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T23:14:20.245805Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"93ff368cdeea47a1","initial-advertise-peer-urls":["https://172.30.51.245:2380"],"listen-peer-urls":["https://172.30.51.245:2380"],"advertise-client-urls":["https://172.30.51.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.51.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:14:20.357692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgPreVoteResp from 93ff368cdeea47a1 at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgVoteResp from 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 93ff368cdeea47a1 elected leader 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.361772Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.36777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"93ff368cdeea47a1","local-member-attributes":"{Name:multinode-392000 ClientURLs:[https://172.30.51.245:2379]}","request-path":"/0/members/93ff368cdeea47a1/attributes","cluster-id":"577d8ccb6648d9a8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:20.367821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.367989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.370538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.372122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.51.245:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.409981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"577d8ccb6648d9a8","local-member-id":"93ff368cdeea47a1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410139Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:20.410799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:18:41 up 6 min,  0 users,  load average: 0.19, 0.35, 0.19
	Linux multinode-392000 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [58046948f7a3] <==
	* I1212 23:16:41.100368       1 main.go:227] handling current node
	I1212 23:16:51.106430       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:16:51.106641       1 main.go:227] handling current node
	I1212 23:17:01.112793       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:17:01.112894       1 main.go:227] handling current node
	I1212 23:17:11.126941       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:17:11.126971       1 main.go:227] handling current node
	I1212 23:17:21.133285       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:17:21.133443       1 main.go:227] handling current node
	I1212 23:17:31.147343       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:17:31.147431       1 main.go:227] handling current node
	I1212 23:17:41.154804       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:17:41.154893       1 main.go:227] handling current node
	I1212 23:17:51.168954       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:17:51.169044       1 main.go:227] handling current node
	I1212 23:18:01.175628       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:18:01.175815       1 main.go:227] handling current node
	I1212 23:18:11.190938       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:18:11.191121       1 main.go:227] handling current node
	I1212 23:18:21.204246       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:18:21.204365       1 main.go:227] handling current node
	I1212 23:18:31.219177       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:18:31.219342       1 main.go:227] handling current node
	I1212 23:18:41.231731       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:18:41.231771       1 main.go:227] handling current node
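
kindnet polls roughly every ten seconds and only ever reports the control-plane node's IP (172.30.51.245), which corroborates that the second node never joined. The daemonset view makes the same point, sketched here assuming minikube's usual "kindnet" daemonset name and app=kindnet label:

	# One desired/ready kindnet pod instead of the two a healthy 2-node cluster would show
	kubectl --context multinode-392000 -n kube-system get daemonset kindnet -o wide
	kubectl --context multinode-392000 -n kube-system get pods -l app=kindnet -o wide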
	
	* 
	* ==> kube-apiserver [6c354edfe422] <==
	* I1212 23:14:22.966861       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:22.967846       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:14:22.980339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:23.000634       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:14:23.000942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:23.002240       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:23.002278       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:23.002287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:23.002295       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:23.011378       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:23.760921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:14:23.770137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:14:23.770155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:24.576880       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:24.669218       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:14:24.814943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:14:24.825391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.51.245]
	I1212 23:14:24.827160       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:14:24.832899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:14:24.873569       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:26.688119       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:26.703417       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:14:26.718299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:38.752415       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 23:14:39.103035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [235957741d34] <==
	* I1212 23:14:38.556349       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:14:38.763046       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 23:14:38.868947       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:14:38.913333       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:14:38.913358       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 23:14:39.118687       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55nr8"
	I1212 23:14:39.128528       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bpcxd"
	I1212 23:14:39.372303       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5g8ks"
	I1212 23:14:39.385762       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-4xn8h"
	I1212 23:14:39.402470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="640.526163ms"
	I1212 23:14:39.423878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.350638ms"
	I1212 23:14:39.455212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.288269ms"
	I1212 23:14:39.455353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.7µs"
	I1212 23:14:39.653487       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 23:14:39.680197       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-5g8ks"
	I1212 23:14:39.711806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.664787ms"
	I1212 23:14:39.734721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.862413ms"
	I1212 23:14:39.785084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.307746ms"
	I1212 23:14:39.785221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.699µs"
	I1212 23:14:55.812545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.499µs"
	I1212 23:14:55.831423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.3µs"
	I1212 23:14:57.948826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.3µs"
	I1212 23:14:57.994852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.967283ms"
	I1212 23:14:57.995045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.9µs"
	I1212 23:14:58.351328       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [a260d7090f93] <==
	* I1212 23:14:40.548388       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:40.568436       1 node.go:141] Successfully retrieved node IP: 172.30.51.245
	I1212 23:14:40.635432       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:40.635716       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:40.638923       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:40.639152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:40.639551       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:40.640017       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:40.641081       1 config.go:188] "Starting service config controller"
	I1212 23:14:40.641288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:40.641685       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:40.641937       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:40.644879       1 config.go:315] "Starting node config controller"
	I1212 23:14:40.645073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:40.742503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:14:40.742567       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:40.745261       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2313251d444b] <==
	* W1212 23:14:22.973548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:22.973806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.868650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:14:23.868677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:14:23.880821       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:14:23.880850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:23.906825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:14:23.907043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:14:23.908460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:23.909050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.954797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:23.954886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:23.961825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:14:23.961846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:14:24.085183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:14:24.085212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:14:24.103672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:14:24.103696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:14:24.119305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:24.119483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:24.143381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:14:24.143650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:14:24.300755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:14:24.300991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:14:25.823950       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:18:42 UTC. --
	Dec 12 23:14:55 multinode-392000 kubelet[2682]: I1212 23:14:55.805756    2682 topology_manager.go:215] "Topology Admit Handler" podUID="17b97a16-eb8e-4bb4-a224-baa68e4c5efe" podNamespace="kube-system" podName="coredns-5dd5756b68-4xn8h"
	Dec 12 23:14:55 multinode-392000 kubelet[2682]: I1212 23:14:55.808469    2682 topology_manager.go:215] "Topology Admit Handler" podUID="0a8f47d8-719b-4927-a11d-e796c2d01064" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 23:14:55 multinode-392000 kubelet[2682]: I1212 23:14:55.814564    2682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b97a16-eb8e-4bb4-a224-baa68e4c5efe-config-volume\") pod \"coredns-5dd5756b68-4xn8h\" (UID: \"17b97a16-eb8e-4bb4-a224-baa68e4c5efe\") " pod="kube-system/coredns-5dd5756b68-4xn8h"
	Dec 12 23:14:55 multinode-392000 kubelet[2682]: I1212 23:14:55.818353    2682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0a8f47d8-719b-4927-a11d-e796c2d01064-tmp\") pod \"storage-provisioner\" (UID: \"0a8f47d8-719b-4927-a11d-e796c2d01064\") " pod="kube-system/storage-provisioner"
	Dec 12 23:14:55 multinode-392000 kubelet[2682]: I1212 23:14:55.818490    2682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcxc\" (UniqueName: \"kubernetes.io/projected/17b97a16-eb8e-4bb4-a224-baa68e4c5efe-kube-api-access-kkcxc\") pod \"coredns-5dd5756b68-4xn8h\" (UID: \"17b97a16-eb8e-4bb4-a224-baa68e4c5efe\") " pod="kube-system/coredns-5dd5756b68-4xn8h"
	Dec 12 23:14:55 multinode-392000 kubelet[2682]: I1212 23:14:55.818558    2682 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58n2l\" (UniqueName: \"kubernetes.io/projected/0a8f47d8-719b-4927-a11d-e796c2d01064-kube-api-access-58n2l\") pod \"storage-provisioner\" (UID: \"0a8f47d8-719b-4927-a11d-e796c2d01064\") " pod="kube-system/storage-provisioner"
	Dec 12 23:14:56 multinode-392000 kubelet[2682]: I1212 23:14:56.898818    2682 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7694fc2e072409c82e9a89c81cdb1dbf3955a826194d4c6ce69896a818ffd8c"
	Dec 12 23:14:56 multinode-392000 kubelet[2682]: I1212 23:14:56.908107    2682 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec0e2bb8f7fb3f97224e573a86f1d0c8af411baddfa1adaa20402928c80977d"
	Dec 12 23:14:57 multinode-392000 kubelet[2682]: I1212 23:14:57.972127    2682 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4xn8h" podStartSLOduration=18.972084195 podCreationTimestamp="2023-12-12 23:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 23:14:57.950698509 +0000 UTC m=+31.312686781" watchObservedRunningTime="2023-12-12 23:14:57.972084195 +0000 UTC m=+31.334072367"
	Dec 12 23:15:27 multinode-392000 kubelet[2682]: E1212 23:15:27.001847    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:15:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:15:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:15:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:16:27 multinode-392000 kubelet[2682]: E1212 23:16:27.004655    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:16:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:16:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:16:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:17:27 multinode-392000 kubelet[2682]: E1212 23:17:27.002188    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:17:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:17:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:17:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:18:27 multinode-392000 kubelet[2682]: E1212 23:18:27.002220    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:18:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:18:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:18:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
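
The recurring ip6tables canary errors are benign here: the Buildroot guest kernel lacks the ip6tables nat table, and the cluster is running single-stack IPv4 (kube-proxy logged "No iptables support for family" ipFamily="IPv6" earlier), so kubelet's periodic IPv6 canary simply cannot be installed. If one wanted to silence the noise, a plausible check on the guest (hypothetical; modprobe may fail on this kernel build) would be:

	# See whether the IPv6 nat table's module is loaded or even available
	minikube ssh -p multinode-392000 "lsmod | grep ip6table || sudo modprobe ip6table_nat"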
	
	* 
	* ==> storage-provisioner [f6b34e581fc6] <==
	* I1212 23:14:57.324469       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:14:57.354186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:14:57.354226       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:14:57.375032       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:14:57.377324       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1!
	I1212 23:14:57.379047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"843046f3-0fcd-4f8f-8bbf-0d83d2c229ac", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1 became leader
	I1212 23:14:57.478231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:18:33.818287    4888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000: (12.0137336s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-392000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (445.90s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (751.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- rollout status deployment/busybox
E1212 23:20:53.168177   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 23:21:22.632031   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:21:25.439350   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 23:22:45.856008   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:24:28.641019   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 23:25:53.181256   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 23:26:22.630581   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:26:25.452211   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- rollout status deployment/busybox: exit status 1 (10m3.6583962s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:18:56.542546    4540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
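
With only one schedulable node, the second busybox replica has nowhere to land (DeployApp2Nodes expects its two replicas to be spread across both nodes, and the repeated single pod IP 10.244.0.3 below shows only the control-plane replica ever ran), so the rollout stalls at 1 of 2 and trips the progress deadline. A hedged triage sequence, assuming the test manifest's deployment carries an app=busybox label, would be:

	# Where did the replicas go, and why is one not running?
	kubectl --context multinode-392000 get deployment busybox -o wide
	kubectl --context multinode-392000 get pods -l app=busybox -o wide
	kubectl --context multinode-392000 describe pods -l app=busybox
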
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:00.196405   13164 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:01.832927   10576 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:03.366313   10980 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:05.446148   13024 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:09.897202    5472 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:15.341938   13072 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:20.629402   13060 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:31.699501   10480 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:29:47.892227   14460 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:30:13.893712    3068 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E1212 23:30:36.395693   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:30:48.833300    6632 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:540: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW1212 23:30:48.833300    6632 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
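(The loop above re-runs one jsonpath query until two addresses appear. The equivalent poll as a standalone sketch, with illustrative names and kubectl assumed on PATH:)

	// podips.go - sketch of the Pod-IP query retried at multinode_test.go:521.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		ips := strings.Fields(string(out))
		// Two busybox replicas should yield two IPs; this run only ever
		// produced one (10.244.0.3), which is the failure recorded above.
		fmt.Printf("got %d pod IP(s): %v\n", len(ips), ips)
	}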
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-4rg9t -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-4rg9t -- nslookup kubernetes.io: exit status 1 (421.5035ms)

                                                
                                                
** stderr ** 
	W1212 23:30:49.705621    2484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-5bc68d56bd-4rg9t does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:554: Pod busybox-5bc68d56bd-4rg9t could not resolve 'kubernetes.io': exit status 1
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-x7ldl -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-x7ldl -- nslookup kubernetes.io: (1.7245984s)
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-4rg9t -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-4rg9t -- nslookup kubernetes.default: exit status 1 (408.8934ms)

                                                
                                                
** stderr ** 
	W1212 23:30:51.848572    1060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-5bc68d56bd-4rg9t does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:564: Pod busybox-5bc68d56bd-4rg9t could not resolve 'kubernetes.default': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-x7ldl -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-4rg9t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-4rg9t -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (408.0001ms)

                                                
                                                
** stderr ** 
	W1212 23:30:52.881582    4516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-5bc68d56bd-4rg9t does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:572: Pod busybox-5bc68d56bd-4rg9t could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-x7ldl -- nslookup kubernetes.default.svc.cluster.local
E1212 23:30:53.175694   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
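("does not have a host assigned" is the apiserver refusing exec on a pod the scheduler never bound to a node, consistent with only one busybox replica getting an IP above. A quick standalone check, as a sketch; the pod name is taken from the log:)

	// schedcheck.go - sketch: see whether the failing pod was ever scheduled.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "pod",
			"busybox-5bc68d56bd-4rg9t",
			"-o", "jsonpath={.spec.nodeName}").Output()
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		if node := strings.TrimSpace(string(out)); node == "" {
			fmt.Println("pod is unscheduled - exec/nslookup cannot work")
		} else {
			fmt.Println("pod scheduled on", node)
		}
	}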
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000: (12.0659662s)
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25: (8.3154237s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| mount   | C:\Users\jenkins.minikube7:/minikube-host         | mount-start-2-459600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:10 UTC |                     |
	|         | --profile mount-start-2-459600 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-459600 ssh -- ls                    | mount-start-2-459600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:11 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-459600                           | mount-start-2-459600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:11 UTC |
	| delete  | -p mount-start-1-459600                           | mount-start-1-459600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:11 UTC |
	| start   | -p multinode-392000                               | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- apply -f                   | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC | 12 Dec 23 23:18 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- rollout                    | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC |                     |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000     | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:11:30
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
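	// Sketch (editorial, not captured output): the "Log line format" named above,
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg, can be split with a
	// regexp inferred from that format string; field names are illustrative.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	var klogRe = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)
	
	func main() {
		line := "I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ..."
		if m := klogRe.FindStringSubmatch(line); m != nil {
			fmt.Printf("level=%s mmdd=%s time=%s tid=%s at=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}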
	I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.071716    8472 out.go:309] Setting ErrFile to fd 756...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.094706    8472 out.go:303] Setting JSON to false
	I1212 23:11:30.097728    8472 start.go:128] hostinfo: {"hostname":"minikube7","uptime":76287,"bootTime":1702346402,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:11:30.097728    8472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:11:30.099331    8472 out.go:177] * [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:11:30.099722    8472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:11:30.099722    8472 notify.go:220] Checking for updates...
	I1212 23:11:30.100958    8472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:11:30.101483    8472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:11:30.102516    8472 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:11:30.103354    8472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:11:30.104853    8472 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:11:35.379035    8472 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:11:35.380001    8472 start.go:298] selected driver: hyperv
	I1212 23:11:35.380001    8472 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:11:35.380001    8472 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:11:35.430879    8472 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:11:35.431976    8472 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:11:35.432174    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:11:35.432174    8472 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:11:35.432174    8472 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:11:35.432174    8472 start_flags.go:323] config:
	{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:35.432785    8472 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:11:35.434592    8472 out.go:177] * Starting control plane node multinode-392000 in cluster multinode-392000
	I1212 23:11:35.434882    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:11:35.435410    8472 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:11:35.435444    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:11:35.435894    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:11:35.435894    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:11:35.436458    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:11:35.436458    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json: {Name:mk07adc881ba1a1ec87edb34c2760e84e9f12eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:35.438010    8472 start.go:365] acquiring machines lock for multinode-392000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:11:35.438172    8472 start.go:369] acquired machines lock for "multinode-392000" in 43.3µs
	I1212 23:11:35.438240    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:11:35.438240    8472 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 23:11:35.439294    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:11:35.439734    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:11:35.439996    8472 client.go:168] LocalClient.Create starting
	I1212 23:11:35.440162    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441050    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441543    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:11:37.487993    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:11:39.204044    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:11:39.204143    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:39.204222    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:40.663233    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:44.190819    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:44.191081    8472 main.go:141] libmachine: [stderr =====>] : 
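	// Sketch (editorial, not captured output): every Hyper-V step in this log is
	// libmachine shelling out to powershell.exe and reading stdout, as the
	// [executing ==>]/[stdout =====>] pairs show. A stripped-down version of the
	// switch query above; struct fields are inferred from the JSON printed there.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}
	
	func main() {
		ps := "[Console]::OutputEncoding = [Text.Encoding]::UTF8; " +
			"ConvertTo-Json @(Hyper-V\\Get-VMSwitch | Select Id, Name, SwitchType)"
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		if err != nil {
			fmt.Println("powershell failed:", err)
			return
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, s := range switches {
			fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType) // e.g. "Default Switch" type=1
		}
	}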
	I1212 23:11:44.194062    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:11:44.711737    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: Creating VM...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:47.732456    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:47.732576    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:47.732727    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:11:47.732880    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:49.467956    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:49.468070    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:49.468070    8472 main.go:141] libmachine: Creating VHD
	I1212 23:11:49.468208    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F469FE2D-E21B-45E1-BE12-1FCB18DB12B2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:11:53.108721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:56.276637    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -SizeBytes 20000MB
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stderr =====>] : 
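	// Sketch (editorial, not captured output): the New-VHD/Convert-VHD/Resize-VHD
	// sequence above, together with "Writing magic tar header" below, is how the
	// driver seeds the SSH key into the disk: a fixed VHD is raw data plus a
	// 512-byte footer, so a tar stream written at offset 0 survives the
	// conversion and can be picked up by the guest on first boot. That layout is
	// an assumption here; paths are illustrative and this is not the driver's code.
	package main
	
	import (
		"archive/tar"
		"log"
		"os"
	)
	
	func main() {
		key, err := os.ReadFile("id_rsa.pub") // illustrative key path
		if err != nil {
			log.Fatal(err)
		}
		f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0) // existing fixed VHD; no truncate
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		tw := tar.NewWriter(f) // writes start at offset 0, ahead of the VHD footer
		hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}
		if err := tw.WriteHeader(hdr); err != nil {
			log.Fatal(err)
		}
		if _, err := tw.Write(key); err != nil {
			log.Fatal(err)
		}
		if err := tw.Close(); err != nil {
			log.Fatal(err)
		}
	}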
	I1212 23:11:58.764692    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-392000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000 -DynamicMemoryEnabled $false
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:04.436332    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000 -Count 2
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\boot2docker.iso'
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd'
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:11.817904    8472 main.go:141] libmachine: Starting VM...
	I1212 23:12:11.817904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:12:14.636759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:16.857062    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:16.857260    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:16.857330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:20.386945    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:22.605951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:26.191747    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:28.348821    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:30.824944    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:30.825184    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:31.825449    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:36.445712    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:36.445785    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:37.459217    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:42.223526    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:44.305043    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:44.305406    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:44.305406    8472 machine.go:88] provisioning docker machine ...
	I1212 23:12:44.305506    8472 buildroot.go:166] provisioning hostname "multinode-392000"
	I1212 23:12:44.305650    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:46.463699    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:48.946017    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:48.946116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:48.952068    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:48.964084    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:48.964084    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname
	I1212 23:12:49.130659    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000
	
	I1212 23:12:49.130793    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:51.216440    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:53.725386    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:53.726016    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:53.726016    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:12:53.876910    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:12:53.876910    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:12:53.877039    8472 buildroot.go:174] setting up certificates
	I1212 23:12:53.877109    8472 provision.go:83] configureAuth start
	I1212 23:12:53.877163    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:55.991772    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:58.499603    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:00.594939    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:03.100178    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:03.100273    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:03.100273    8472 provision.go:138] copyHostCerts
	I1212 23:13:03.100538    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:13:03.100666    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:13:03.100666    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:13:03.101260    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:13:03.102786    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:13:03.103156    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:13:03.103156    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:13:03.103581    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:13:03.104593    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:13:03.105032    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:13:03.105032    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:13:03.105182    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:13:03.106302    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000 san=[172.30.51.245 172.30.51.245 localhost 127.0.0.1 minikube multinode-392000]
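	// Sketch (editorial, not captured output): the server cert generated above
	// carries the org and SAN list shown in that log line. A self-signed
	// approximation with Go's crypto/x509; the real flow signs with the
	// ca.pem/ca-key.pem pair named above, and file handling is omitted.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-392000"},
			IPAddresses:  []net.IP{net.ParseIP("172.30.51.245"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}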
	I1212 23:13:03.360027    8472 provision.go:172] copyRemoteCerts
	I1212 23:13:03.374057    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:13:03.374057    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:08.008195    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:08.116237    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7420653s)
	I1212 23:13:08.116237    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:13:08.116427    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:13:08.152557    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:13:08.153040    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:13:08.195988    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:13:08.196559    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:13:08.232338    8472 provision.go:86] duration metric: configureAuth took 14.3551646s
	I1212 23:13:08.232338    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:13:08.233351    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:13:08.233351    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:10.326980    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:12.830327    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:12.831103    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:12.831103    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:13:12.971332    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:13:12.971397    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:13:12.971686    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:13:12.971759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:17.524781    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:17.524929    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:17.532264    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:17.532875    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:17.533036    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:13:17.693682    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:13:17.693682    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:19.797719    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:22.305428    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:22.305611    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:22.311364    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:22.312148    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:22.312148    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:13:23.268460    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:13:23.268460    8472 machine.go:91] provisioned docker machine in 38.9628792s
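The diff-or-replace one-liner above makes the unit update idempotent: docker is only restarted when docker.service.new actually differs, and the "can't stat" message simply means no previous unit existed on first boot. A local Go sketch of the same compare-then-swap idea (paths illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged swaps newPath over path only when the contents differ
// (or path does not exist yet); it reports whether a restart is warranted.
func replaceIfChanged(path, newPath string) (bool, error) {
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	oldData, err := os.ReadFile(path)
	if err == nil && bytes.Equal(oldData, newData) {
		return false, os.Remove(newPath) // identical: discard the staged copy
	}
	// Missing or different: install the staged file.
	return true, os.Rename(newPath, path)
}

func main() {
	changed, err := replaceIfChanged("docker.service", "docker.service.new")
	fmt.Println("changed:", changed, "err:", err)
}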
	I1212 23:13:23.268460    8472 client.go:171] LocalClient.Create took 1m47.8279792s
	I1212 23:13:23.268460    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m47.8282413s
	I1212 23:13:23.268460    8472 start.go:300] post-start starting for "multinode-392000" (driver="hyperv")
	I1212 23:13:23.268460    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:13:23.283134    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:13:23.283134    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:25.344143    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:25.344398    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:25.344531    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:27.853202    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:27.960465    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6773102s)
	I1212 23:13:27.975019    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:13:27.981168    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:13:27.981317    8472 command_runner.go:130] > ID=buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:13:27.981317    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:13:27.981408    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:13:27.981509    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:13:27.981573    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:13:27.982899    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:13:27.982899    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:13:27.996731    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:13:28.011281    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:13:28.049499    8472 start.go:303] post-start completed in 4.7810169s
	I1212 23:13:28.051903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:30.124520    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:32.635986    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:32.636168    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:32.636335    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:13:32.639612    8472 start.go:128] duration metric: createHost completed in 1m57.2008454s
	I1212 23:13:32.639734    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:37.252006    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:37.252675    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:37.252675    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:13:37.394466    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422817.389981544
	
	I1212 23:13:37.394466    8472 fix.go:206] guest clock: 1702422817.389981544
	I1212 23:13:37.394466    8472 fix.go:219] Guest: 2023-12-12 23:13:37.389981544 +0000 UTC Remote: 2023-12-12 23:13:32.6396781 +0000 UTC m=+122.746612401 (delta=4.750303444s)
	I1212 23:13:37.394466    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:39.525951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:42.048856    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:42.049171    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:42.054999    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:42.057020    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:42.057020    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702422817
	I1212 23:13:42.207558    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:13:37 UTC 2023
	
	I1212 23:13:42.207558    8472 fix.go:226] clock set: Tue Dec 12 23:13:37 UTC 2023
	 (err=<nil>)
	I1212 23:13:42.207558    8472 start.go:83] releasing machines lock for "multinode-392000", held for 2m6.7687735s
	I1212 23:13:42.208388    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:46.748039    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:46.748116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:46.752230    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:13:46.752339    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:46.765270    8472 ssh_runner.go:195] Run: cat /version.json
	I1212 23:13:46.765814    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:51.518393    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.518589    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.519047    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.538571    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.618146    8472 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 23:13:51.618146    8472 ssh_runner.go:235] Completed: cat /version.json: (4.8528548s)
	I1212 23:13:51.632470    8472 ssh_runner.go:195] Run: systemctl --version
	I1212 23:13:51.705182    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:13:51.705326    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9530322s)
	I1212 23:13:51.705474    8472 command_runner.go:130] > systemd 247 (247)
	I1212 23:13:51.705474    8472 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:13:51.717133    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:13:51.725591    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:13:51.726008    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:13:51.738060    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:13:51.760525    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:13:51.761431    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:13:51.761431    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:51.761737    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:51.787290    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:13:51.802604    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:13:51.833298    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:13:51.849124    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:13:51.865424    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:13:51.896430    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.925062    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:13:51.954292    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.986199    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:13:52.018341    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
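The sed runs above pin the sandbox image and force SystemdCgroup = false so containerd matches the chosen cgroupfs driver. The same substitutions expressed in-process with Go regexps, over an illustrative config snippet:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// Same substitutions the sed one-liners in the log perform.
	conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(conf, `${1}SystemdCgroup = false`)
	fmt.Print(conf)
}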
	I1212 23:13:52.051014    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:13:52.066722    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:13:52.079021    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:13:52.108672    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:52.285653    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 23:13:52.311279    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:52.326723    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Unit]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:13:52.345659    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:13:52.345659    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:13:52.345659    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Service]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Type=notify
	I1212 23:13:52.345659    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:13:52.345659    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:13:52.346602    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:13:52.346602    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:13:52.346602    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:13:52.346602    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:13:52.346602    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:13:52.346602    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:13:52.346602    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:13:52.346602    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:13:52.346602    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:13:52.346602    8472 command_runner.go:130] > Delegate=yes
	I1212 23:13:52.346602    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:13:52.346602    8472 command_runner.go:130] > KillMode=process
	I1212 23:13:52.346602    8472 command_runner.go:130] > [Install]
	I1212 23:13:52.346602    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:13:52.361605    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.398612    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:13:52.438497    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.478249    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.515469    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:13:52.572526    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.596922    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:52.625715    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:13:52.640295    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:13:52.648317    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:13:52.660918    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:13:52.675527    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:13:52.716542    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:13:52.882321    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:13:53.028395    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:13:53.028810    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:13:53.070347    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:53.231794    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:13:54.707655    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4758548s)
	I1212 23:13:54.722714    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:54.886957    8472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 23:13:55.059072    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:55.219495    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.397909    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 23:13:55.436243    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.597738    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 23:13:55.697504    8472 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 23:13:55.711625    8472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:13:55.718995    8472 command_runner.go:130] > Device: 16h/22d	Inode: 928         Links: 1
	I1212 23:13:55.718995    8472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 23:13:55.719086    8472 command_runner.go:130] > Access: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Modify: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Change: 2023-12-12 23:13:55.617702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] >  Birth: -
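start.go waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl. A minimal polling sketch of such a wait (interval and helper name are assumptions):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the deadline passes, mirroring the
// "Will wait 60s for socket path" step above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}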
	I1212 23:13:55.719245    8472 start.go:543] Will wait 60s for crictl version
	I1212 23:13:55.732224    8472 ssh_runner.go:195] Run: which crictl
	I1212 23:13:55.737239    8472 command_runner.go:130] > /usr/bin/crictl
	I1212 23:13:55.751402    8472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:13:55.821560    8472 command_runner.go:130] > Version:  0.1.0
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeName:  docker
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:13:55.821684    8472 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 23:13:55.831458    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.865302    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.877867    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.906635    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.909704    8472 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 23:13:55.909704    8472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: 172.30.48.1/20
	I1212 23:13:55.931095    8472 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 23:13:55.936984    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
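The bash pipeline above upserts the host.minikube.internal mapping: it drops any stale line and appends the current gateway IP. A pure-string Go equivalent (helper name illustrative):

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any existing line for the given hostname and appends a
// fresh "ip<TAB>host" entry, like the grep -v / echo pipeline in the log.
func upsertHost(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.30.48.2\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "172.30.48.1", "host.minikube.internal"))
}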
	I1212 23:13:55.954782    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:13:55.966850    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:13:55.989987    8472 docker.go:671] Got preloaded images: 
	I1212 23:13:55.989987    8472 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 23:13:56.002978    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:13:56.016572    8472 command_runner.go:139] > {"Repositories":{}}
	I1212 23:13:56.029505    8472 ssh_runner.go:195] Run: which lz4
	I1212 23:13:56.035359    8472 command_runner.go:130] > /usr/bin/lz4
	I1212 23:13:56.035359    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:13:56.046382    8472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:13:56.052856    8472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 23:13:58.736125    8472 docker.go:635] Took 2.700536 seconds to copy over tarball
	I1212 23:13:58.753146    8472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:14:08.022919    8472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2697318s)
	I1212 23:14:08.022919    8472 ssh_runner.go:146] rm: /preloaded.tar.lz4
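After scp-ing the 423 MB preload tarball, the images are unpacked with tar -I lz4 and the run is timed (9.27s here). A sketch of that step, assuming tar and lz4 are installed and using the tarball path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Mirrors: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted in %s\n", time.Since(start).Round(time.Millisecond))
}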
	I1212 23:14:08.095190    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:14:08.111721    8472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 23:14:08.111721    8472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 23:14:08.157625    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:14:08.340167    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:14:10.676687    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3364436s)
	I1212 23:14:10.688217    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:14:10.713622    8472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 23:14:10.713688    8472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:10.713884    8472 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 23:14:10.713884    8472 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:14:10.725093    8472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 23:14:10.761269    8472 command_runner.go:130] > cgroupfs
	I1212 23:14:10.761441    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:10.761635    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:10.761699    8472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:14:10.761699    8472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.51.245 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-392000 NodeName:multinode-392000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.51.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.51.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:14:10.761920    8472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.51.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-392000"
	  kubeletExtraArgs:
	    node-ip: 172.30.51.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.51.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:14:10.762050    8472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
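kubeadm.go above emits the kubelet unit with ExecStart flags derived from the node config. A sketch of assembling such flags deterministically, so the generated unit is stable across runs (map contents taken from the unit above; helper name assumed):

package main

import (
	"fmt"
	"sort"
	"strings"
)

// kubeletFlags renders --key=value pairs in sorted order so the generated
// unit text does not churn between provisioning runs.
func kubeletFlags(opts map[string]string) string {
	keys := make([]string, 0, len(opts))
	for k := range opts {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, opts[k]))
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println("/var/lib/minikube/binaries/v1.28.4/kubelet " + kubeletFlags(map[string]string{
		"container-runtime-endpoint": "unix:///var/run/cri-dockerd.sock",
		"hostname-override":          "multinode-392000",
		"kubeconfig":                 "/etc/kubernetes/kubelet.conf",
		"node-ip":                    "172.30.51.245",
	}))
}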
	I1212 23:14:10.779262    8472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:14:10.794245    8472 command_runner.go:130] > kubeadm
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubectl
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubelet
	I1212 23:14:10.794911    8472 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:14:10.809051    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:14:10.823032    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:14:10.848411    8472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:14:10.870951    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 23:14:10.911088    8472 ssh_runner.go:195] Run: grep 172.30.51.245	control-plane.minikube.internal$ /etc/hosts
	I1212 23:14:10.917196    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.51.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:14:10.933858    8472 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000 for IP: 172.30.51.245
	I1212 23:14:10.933934    8472 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:10.934858    8472 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 23:14:10.935530    8472 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 23:14:10.936524    8472 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key
	I1212 23:14:10.936810    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt with IP's: []
	I1212 23:14:11.093297    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt ...
	I1212 23:14:11.093297    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt: {Name:mk11a4d3835ab9ea840eb8ac6add84affb6c8dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.094980    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key ...
	I1212 23:14:11.094980    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key: {Name:mk06fddcf6422638da0b31b4d428923c70703238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.095936    8472 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa
	I1212 23:14:11.096955    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa with IP's: [172.30.51.245 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:14:11.196952    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa ...
	I1212 23:14:11.197202    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa: {Name:mkdf435dcc8983bec1e572c7a448162db34b2756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.198846    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa ...
	I1212 23:14:11.198846    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa: {Name:mk41672c6a02cbb3382bef7d288d52f8f77ae5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.199921    8472 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt
	I1212 23:14:11.213239    8472 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key
	I1212 23:14:11.214508    8472 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key
	I1212 23:14:11.214661    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt with IP's: []
	I1212 23:14:11.328325    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt ...
	I1212 23:14:11.328325    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt: {Name:mk6e1ad80e6dad066789266c677d39834bd11583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.330616    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key ...
	I1212 23:14:11.330616    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key: {Name:mk3959079764fecf7ecbee13715f18146dcf3506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
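crypto.go above generates keys and writes certs signed against the shared minikube CA. A condensed stdlib sketch of issuing a leaf certificate with the apiserver SANs from this log (a throwaway CA stands in for the reused .minikube/ca.key; error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow reuses the existing minikubeCA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate carrying the apiserver SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("172.30.51.245"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}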
	I1212 23:14:11.332006    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:14:11.332144    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:14:11.332442    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:14:11.342046    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:14:11.342358    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:14:11.342600    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:14:11.342813    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:14:11.343009    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:14:11.343165    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem (1338 bytes)
	W1212 23:14:11.343825    8472 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816_empty.pem, impossibly tiny 0 bytes
	I1212 23:14:11.343825    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 23:14:11.344117    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 23:14:11.344381    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 23:14:11.344630    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 23:14:11.344862    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem (1708 bytes)
	I1212 23:14:11.344862    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem -> /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.345574    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.345718    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:11.345852    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:14:11.386214    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:14:11.425674    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:14:11.464191    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:14:11.502474    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:14:11.538128    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:14:11.575129    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:14:11.613906    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:14:11.650659    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem --> /usr/share/ca-certificates/13816.pem (1338 bytes)
	I1212 23:14:11.686706    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /usr/share/ca-certificates/138162.pem (1708 bytes)
	I1212 23:14:11.726349    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:14:11.762200    8472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:14:11.800421    8472 ssh_runner.go:195] Run: openssl version
	I1212 23:14:11.809841    8472 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:14:11.823469    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13816.pem && ln -fs /usr/share/ca-certificates/13816.pem /etc/ssl/certs/13816.pem"
	I1212 23:14:11.861330    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.882273    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.889871    8472 command_runner.go:130] > 51391683
	I1212 23:14:11.903385    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13816.pem /etc/ssl/certs/51391683.0"
	I1212 23:14:11.935310    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138162.pem && ln -fs /usr/share/ca-certificates/138162.pem /etc/ssl/certs/138162.pem"
	I1212 23:14:11.964261    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970426    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970992    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.982253    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.990140    8472 command_runner.go:130] > 3ec20f2e
	I1212 23:14:12.009886    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138162.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:14:12.038995    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:14:12.069702    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.089604    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.096884    8472 command_runner.go:130] > b5213941
	I1212 23:14:12.110390    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
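
The hash/symlink pairs above follow OpenSSL's subject-hash convention: openssl x509 -hash prints a short hash of the certificate's subject, and the system trust store resolves certificates through a <hash>.0 symlink in /etc/ssl/certs. A minimal sketch of that step, reusing the first certificate from the log (paths and the .0 suffix match the ln -fs commands above):

    # Compute the subject hash and install the trust-store symlink.
    CERT=/usr/share/ca-certificates/13816.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints 51391683 for this cert
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # .0 = first cert with this subject hash
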
	I1212 23:14:12.140395    8472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:14:12.146418    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
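
A rough shell equivalent of the first-start check certs.go performs here (an approximation, not minikube's actual Go code):

    # Absent etcd certs directory => this VM has never run kubeadm init.
    if ls /var/lib/minikube/certs/etcd >/dev/null 2>&1; then
        echo "etcd certs present; reusing existing cluster state"
    else
        echo "no etcd certs; treating this as a first start"
    fi
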
	I1212 23:14:12.146418    8472 kubeadm.go:404] StartCluster: {Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:14:12.155995    8472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 23:14:12.194954    8472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:14:12.223698    8472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:14:12.252003    8472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266543    8472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
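
The failed sudo ls -la above is a gate, not an error: stale-config cleanup only runs when all four kubeconfig files survive from a previous run. Sketched as shell (an approximation of the logic behind "config check failed, skipping stale config cleanup", not minikube's literal code):

    # Only clean up stale configs if a previous kubeadm run left them behind.
    if sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf >/dev/null 2>&1; then
        echo "prior configs found; compare against the new kubeadm.yaml before init"
    else
        echo "no prior configs; proceed straight to kubeadm init"
    fi
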
	I1212 23:14:12.266717    8472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:14:12.516893    8472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.516947    8472 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.517226    8472 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:14:12.517226    8472 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:14:13.027121    8472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027121    8472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027384    8472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027384    8472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027545    8472 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.027656    8472 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.446026    8472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447343    8472 out.go:204]   - Generating certificates and keys ...
	I1212 23:14:13.446026    8472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447732    8472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:14:13.447800    8472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:14:13.448160    8472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.448217    8472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.576197    8472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.576331    8472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.756341    8472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.756398    8472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.844910    8472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:13.844957    8472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:14.189004    8472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.189084    8472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.353924    8472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.353924    8472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.354351    8472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.354351    8472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.509618    8472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.509618    8472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.510200    8472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.510200    8472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.634812    8472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.634883    8472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.965686    8472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:14.965747    8472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:15.155790    8472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:14:15.155863    8472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:14:15.156194    8472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.156194    8472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.627970    8472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:15.628062    8472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:16.106269    8472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.106461    8472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.241202    8472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.241256    8472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.532306    8472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.532306    8472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.533302    8472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.533432    8472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.538562    8472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.538657    8472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.539723    8472 out.go:204]   - Booting up control plane ...
	I1212 23:14:16.539967    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.540045    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.541855    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.541855    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.543221    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.543286    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.570893    8472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.570998    8472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.572167    8472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572329    8472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572476    8472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:14:16.572590    8472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:14:16.741649    8472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:16.741649    8472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:25.247209    8472 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247209    8472 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504943 seconds
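
Once kubeadm reports the components healthy, the same check can be repeated by hand from inside the VM; one option (a manual verification aid, not a step this run executes) is the apiserver's readyz endpoint, using the kubeconfig and kubectl binary this run installed:

    sudo env KUBECONFIG=/etc/kubernetes/admin.conf \
        /var/lib/minikube/binaries/v1.28.4/kubectl get --raw='/readyz?verbose'
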
	I1212 23:14:25.247636    8472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.247636    8472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.274937    8472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.274937    8472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.809600    8472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.809600    8472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.810164    8472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:25.810216    8472 kubeadm.go:322] [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:26.326643    8472 kubeadm.go:322] [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.327542    8472 out.go:204]   - Configuring RBAC rules ...
	I1212 23:14:26.326643    8472 command_runner.go:130] > [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.328018    8472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.328018    8472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.341522    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.341728    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.354025    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.354025    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.359843    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.359843    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.364553    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.364553    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.369249    8472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.369249    8472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.393459    8472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.393481    8472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.711238    8472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.711357    8472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.750599    8472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.750686    8472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.751909    8472 kubeadm.go:322] 
	I1212 23:14:26.752244    8472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752244    8472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752424    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.753252    8472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753252    8472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753309    8472 kubeadm.go:322] 
	I1212 23:14:26.753415    8472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322] 
	I1212 23:14:26.753445    8472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.753445    8472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.754014    8472 kubeadm.go:322] 
	I1212 23:14:26.754183    8472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754220    8472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754289    8472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 kubeadm.go:322] 
	I1212 23:14:26.754289    8472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754289    8472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754820    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754820    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754878    8472 kubeadm.go:322] 	--control-plane 
	I1212 23:14:26.754917    8472 command_runner.go:130] > 	--control-plane 
	I1212 23:14:26.754917    8472 kubeadm.go:322] 
	I1212 23:14:26.754995    8472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] 
	I1212 23:14:26.755165    8472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755165    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755707    8472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
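
The --discovery-token-ca-cert-hash in the join commands is the SHA-256 of the cluster CA's public key. It can be recomputed with the standard kubeadm recipe from the CA this run staged at /var/lib/minikube/certs/ca.crt (shown as a verification aid; the output should match the sha256:149ee08a... value above):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
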
	I1212 23:14:26.755762    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:26.755762    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:26.756717    8472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:14:26.771363    8472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 23:14:26.781345    8472 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: 2023-12-12 23:12:39.138849800 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Change: 2023-12-12 23:12:30.064000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] >  Birth: -
	I1212 23:14:26.781345    8472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:14:26.781345    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:14:26.831214    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:14:28.360489    8472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5292685s)
	I1212 23:14:28.360489    8472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:14:28.377434    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.378438    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-392000 minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.385676    8472 command_runner.go:130] > -16
	I1212 23:14:28.385745    8472 ops.go:34] apiserver oom_adj: -16
	I1212 23:14:28.554211    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:14:28.554334    8472 command_runner.go:130] > node/multinode-392000 labeled
	I1212 23:14:28.574988    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.698031    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:28.717179    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.830537    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.348608    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.461037    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.849506    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.957356    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.362625    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.472272    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.848396    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.953849    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.353576    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.462341    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.853090    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.967586    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.355892    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.469924    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.859728    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.962773    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.364239    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.470177    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.864784    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.968916    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.351439    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.459257    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.855142    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.992369    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.364118    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.480745    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.848471    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.981045    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.353504    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:36.474547    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.857811    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.009603    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.360939    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.541831    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.855360    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.978223    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.358089    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:38.550481    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.868761    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.022604    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:39.352440    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.596621    8472 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:14:39.596712    8472 command_runner.go:130] > default   0         0s
	I1212 23:14:39.596736    8472 kubeadm.go:1088] duration metric: took 11.2361966s to wait for elevateKubeSystemPrivileges.
	I1212 23:14:39.596811    8472 kubeadm.go:406] StartCluster complete in 27.450269s
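
The burst of serviceaccounts "default" not found errors above is expected: elevateKubeSystemPrivileges polls until the controller-manager has asynchronously created the default ServiceAccount. Roughly, as shell (a sketch of the retry loop, not minikube's Go implementation):

    # Retry until kube-controller-manager has created the default ServiceAccount.
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done
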
	I1212 23:14:39.596862    8472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.597021    8472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.598694    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.600390    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:14:39.600697    8472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:14:39.600890    8472 addons.go:69] Setting storage-provisioner=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:69] Setting default-storageclass=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:231] Setting addon storage-provisioner=true in "multinode-392000"
	I1212 23:14:39.601014    8472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-392000"
	I1212 23:14:39.601153    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:39.601286    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:39.602024    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.602448    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.615520    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.616537    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.618133    8472 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:14:39.618679    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.618746    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.618746    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.618746    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.632969    8472 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1212 23:14:39.632969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Audit-Id: 48d468c3-d2b5-4ebf-8a31-5cfcaaf2e038
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.633400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.633475    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.633615    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634237    8472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634414    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.634442    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:39.634488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.647166    8472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:14:39.647166    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Audit-Id: 1d18df1e-467b-45b4-8fd3-f1be9c0eb077
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.647166    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.647166    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.647166    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.647166    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.650190    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:39.650593    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.650593    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Audit-Id: 257b2ee0-65f9-4fbe-a3e6-2b26b38e4e97
	I1212 23:14:39.650746    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.650746    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.650879    8472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-392000" context rescaled to 1 replicas
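
The GET/PUT pair above rescales coredns through the Deployment's autoscaling/v1 Scale subresource, dropping spec.replicas from 2 to 1 since the cluster only has one node at this point. The equivalent from a shell (same effect, different client, using the paths this run uses elsewhere):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system scale deployment coredns --replicas=1
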
	I1212 23:14:39.650983    8472 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:14:39.652101    8472 out.go:177] * Verifying Kubernetes components...
	I1212 23:14:39.667782    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:39.958848    8472 command_runner.go:130] > apiVersion: v1
	I1212 23:14:39.958848    8472 command_runner.go:130] > data:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   Corefile: |
	I1212 23:14:39.958848    8472 command_runner.go:130] >     .:53 {
	I1212 23:14:39.958848    8472 command_runner.go:130] >         errors
	I1212 23:14:39.958848    8472 command_runner.go:130] >         health {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            lameduck 5s
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         ready
	I1212 23:14:39.958848    8472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            pods insecure
	I1212 23:14:39.958848    8472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:14:39.958848    8472 command_runner.go:130] >            ttl 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         prometheus :9153
	I1212 23:14:39.958848    8472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            max_concurrent 1000
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         cache 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loop
	I1212 23:14:39.958848    8472 command_runner.go:130] >         reload
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loadbalance
	I1212 23:14:39.958848    8472 command_runner.go:130] >     }
	I1212 23:14:39.958848    8472 command_runner.go:130] > kind: ConfigMap
	I1212 23:14:39.958848    8472 command_runner.go:130] > metadata:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:14:26Z"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   name: coredns
	I1212 23:14:39.958848    8472 command_runner.go:130] >   namespace: kube-system
	I1212 23:14:39.958848    8472 command_runner.go:130] >   resourceVersion: "257"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   uid: 7f397c04-a5c3-4364-9f10-d28458f5d6c8
	I1212 23:14:39.959540    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:14:39.961001    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.962156    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.963642    8472 node_ready.go:35] waiting up to 6m0s for node "multinode-392000" to be "Ready" ...
	I1212 23:14:39.963798    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.963914    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.963987    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.963987    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.969659    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:39.969659    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Audit-Id: ed4f4991-8208-4d64-8919-42fbdb031b1b
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.970862    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:39.972406    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.972406    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.972643    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.972643    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.974394    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:39.975312    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Audit-Id: 8a9ed035-646e-4f38-b110-fe61c0dc496f
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.975401    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.975946    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.488957    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.488957    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.488957    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.488957    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.492969    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:40.492969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Audit-Id: d903c580-8adc-4d96-8f5f-d51f731bc93c
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.492969    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.668167    8472 command_runner.go:130] > configmap/coredns replaced
	I1212 23:14:40.669157    8472 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
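
Combining the ConfigMap dump at 23:14:39.958848 with the two sed insertions above (a log directive before errors, a hosts block before forward), the replaced Corefile now reads (indentation condensed):

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           172.30.48.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
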
	I1212 23:14:40.981876    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.981950    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.982011    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.982011    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.991394    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:40.991394    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Audit-Id: ab5b6285-e3ff-4e6f-b61b-a20df0759ba6
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.991394    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.489914    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.490030    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.490030    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.490030    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.494868    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:41.495917    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Audit-Id: 1e563910-36f9-4968-810e-a0bd4b1bd52f
	I1212 23:14:41.496167    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.496302    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.496696    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.903563    8472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:41.903563    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:41.904285    8472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:41.904285    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:14:41.904285    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.905110    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:41.906532    8472 addons.go:231] Setting addon default-storageclass=true in "multinode-392000"
	I1212 23:14:41.906532    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:41.907304    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.980106    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.980486    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.980486    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.980486    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.985786    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:41.985786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.985786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Audit-Id: 08bb64de-dde1-4fa6-8913-0f6b5de0cf24
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.986033    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.986033    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.986463    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.987219    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
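
The repeated GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000 cycles above are the node_ready wait loop: the Node object is re-fetched roughly twice a second until its Ready condition turns True, which is why the log keeps printing has status "Ready":"False". A minimal client-go sketch of that loop, assuming a standard kubeconfig:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-392000", metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        fmt.Printf("node %q has status Ready=%s\n", node.Name, cond.Status)
                        if cond.Status == corev1.ConditionTrue {
                            return
                        }
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
        }
    }
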
	I1212 23:14:42.486548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.486653    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.486653    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.486653    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.496333    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:42.496447    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.496447    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.496524    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.496524    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Audit-Id: 4ab1601a-d766-4e5d-a976-df70bc7f3fc6
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.496654    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.497705    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:42.979753    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.979865    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.979865    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.979865    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.984301    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:42.984301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Audit-Id: d84e4388-d133-418c-ad44-eb666ea80368
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.984627    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.984771    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.985134    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.487286    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.487436    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.487436    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.487436    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.493059    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.493240    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Audit-Id: ff7197c8-30b8-4b58-8cc1-df9d319b0dbf
	I1212 23:14:43.493700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.979059    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.979132    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.979132    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.979132    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.984231    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.984231    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Audit-Id: a3b2e6ef-d4d8-4f3e-b9c5-6d5c3c21bbd3
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.984345    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.984345    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.984416    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.984416    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.984602    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.095027    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.095183    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.095249    8472 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:44.095249    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:14:44.095249    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.120131    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:44.483249    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.483332    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.483332    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.483332    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.487173    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.488191    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.488191    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.488335    8472 round_trippers.go:580]     Audit-Id: 266b4ffc-e86f-4f1b-b463-36bca9136481
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.488839    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.489392    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:44.989331    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.989428    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.989428    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.989428    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.992917    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.993400    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Audit-Id: d75583c4-9a74-49b4-bbf3-b56138886974
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.993757    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.481494    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.481494    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.481494    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.481778    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.487002    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:45.487002    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Audit-Id: 34cccb14-bef0-4d33-bac4-e822ad4bf7d0
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.487387    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.990444    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.990444    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.990444    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.990444    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.994459    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:45.995453    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Audit-Id: 75a4ef11-ddaa-4f93-8672-e7309c071368
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.995553    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.995597    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.996008    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.478860    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.478860    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.478860    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.478860    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.482906    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:46.482906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.482906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.484021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Audit-Id: f2e453d5-50bc-4639-bda1-a5a03905d0ad
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:46.484906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.485283    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.902984    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
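
The [executing ==>] / [stdout =====>] pairs above are libmachine's Hyper-V driver shelling out to PowerShell for the VM's state and first IP address; the IP then seeds the SSH client created on the sshutil.go:53 line. A rough, self-contained sketch of that round trip (the psQuery helper name is illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // psQuery runs a single PowerShell expression the way the log lines show,
    // with -NoProfile -NonInteractive, and returns trimmed stdout.
    func psQuery(expr string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr,
        ).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := psQuery(`( Hyper-V\Get-VM multinode-392000 ).state`)
        if err != nil {
            panic(err)
        }
        ip, err := psQuery(`(( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]`)
        if err != nil {
            panic(err)
        }
        // With the IP in hand, an SSH client is built against port 22 using the
        // per-machine id_rsa key, as the sshutil.go line records.
        fmt.Printf("state=%s ip=%s -> ssh docker@%s:22\n", state, ip, ip)
    }
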
	I1212 23:14:46.980436    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.980521    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.980521    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.980521    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.984189    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:46.984189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Audit-Id: 7c159fbf-c0d0-41ed-a33b-761beff59770
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.984333    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.984744    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.985579    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:47.051355    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:47.484303    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.484303    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.484303    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.484303    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.488895    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:47.488895    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Audit-Id: 28e8c341-cf42-49da-a69a-ab79f001048f
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.489240    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:47.868848    8472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:14:47.868942    8472 command_runner.go:130] > pod/storage-provisioner created
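
The ssh_runner line at 23:14:47 and the command_runner output above show how the addon lands: the manifest is copied into the VM over SSH and then applied with the pinned kubectl binary. A self-contained sketch of the apply step using golang.org/x/crypto/ssh, with the key path, user, and IP taken from the log (the structure is mine, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "172.30.51.245:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
                "/var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
        fmt.Println(string(out), err)
    }
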
	I1212 23:14:47.990911    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.991083    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.991083    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.991083    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.996324    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:47.996324    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Audit-Id: 898f23b9-63a4-46cb-8539-9e21fae3e688
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.997714    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.480781    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.480862    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.480862    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.480862    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.484374    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.485189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.485189    8472 round_trippers.go:580]     Audit-Id: 1a3b1ec7-5eb6-4bb8-b344-5426a5516c00
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.485621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.989623    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.989623    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.989623    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.989698    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.992877    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.993906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Audit-Id: 975a7df8-210f-4288-bec3-86537d1ea98a
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.993906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.993906    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:49.083047    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:49.083318    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:49.083618    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:14:49.220179    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:49.478362    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.478404    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.478488    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.478488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.486550    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:49.486550    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Audit-Id: 886c4e27-fc97-4d2e-be30-23c8528e1331
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.487579    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:49.633908    8472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:14:49.634368    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:14:49.634438    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.634438    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.634438    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.638301    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.638301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Length: 1273
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Audit-Id: 478d6e3c-e333-45bd-ad37-ff39e2c109a4
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.638613    8472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:14:49.639458    8472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.639570    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:14:49.639570    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:49.639632    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.643499    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.643499    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.643499    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Length: 1220
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Audit-Id: a15a2fa8-ae37-4d33-8ee0-c9808f9a288d
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.644178    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.644178    8472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.682970    8472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:14:49.684353    8472 addons.go:502] enable addons completed in 10.0836106s: enabled=[storage-provisioner default-storageclass]
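
The GET-then-PUT against /apis/storage.k8s.io/v1/storageclasses/standard just before this summary is the default-storageclass addon at work: it reads the class back after kubectl apply and rewrites it with the storageclass.kubernetes.io/is-default-class annotation set, as visible in the request body above. Sketched with client-go, under the same kubeconfig assumption as the earlier sketches:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

        // Issues PUT /apis/storage.k8s.io/v1/storageclasses/standard, as in the log.
        if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("standard marked as the default StorageClass")
    }
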
	I1212 23:14:49.980729    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.980729    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.980729    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.980729    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.984838    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:49.985229    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Audit-Id: ce24cfdd-3acb-4830-ac23-4db47133d6a3
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.985323    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.985624    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.483312    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.483375    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.483375    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.483375    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.488227    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:50.488227    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Audit-Id: 6991df1a-7c65-4f8c-aa6d-8a4b07664792
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.488335    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.488445    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.981018    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.981153    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.981153    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.981153    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.986420    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:50.987021    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Audit-Id: 05d03ac9-757b-47ae-892d-06c9975e0504
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.987288    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.481784    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.481935    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.481935    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.481935    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.487331    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:51.487741    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Audit-Id: ea8e810d-7571-41b8-a29c-f7b350aa7e48
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.488700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.489229    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
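
The node_ready.go loop above repeatedly GETs /api/v1/nodes/multinode-392000 and inspects the returned node conditions. Below is a minimal sketch of such a poll, assuming client-go; the kubeconfig path, the 500ms interval, and the 6m timeout are illustrative assumptions, not minikube's actual node_ready.go code.

// Sketch: poll a node's Ready condition with client-go, mirroring the
// GET /api/v1/nodes/<name> loop visible in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig such as the one minikube writes (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll roughly every 500ms (the cadence the log shows), giving up after 6m.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "multinode-392000", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// The node counts as Ready only when its NodeReady condition is True.
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("node ready wait finished:", err)
}

Once the NodeReady condition flips to True, as it does at 23:14:55 below, a loop like this returns and the wait moves on to the system pods.
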
	I1212 23:14:51.980060    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.980060    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.980060    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.980060    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.986763    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:51.987222    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Audit-Id: e66e1130-e80e-4e5c-a2df-c6f097d5374f
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.987303    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.487530    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.487615    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.487615    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.487615    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.491306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.491306    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Audit-Id: 6d39f79a-048a-4380-88c0-1538a97cf6cb
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.492158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.988203    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.988350    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.988350    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.988350    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.991874    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.991874    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Audit-Id: b82dc74d-b44e-41ac-8e64-37803addc6c1
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.991874    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.992376    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.992376    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.992866    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.487128    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.487128    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.487128    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.487128    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.490404    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.490404    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Audit-Id: fcdaf883-7338-4102-abda-846f7169bb26
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.491349    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.491797    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:53.988709    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.988958    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.988958    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.988958    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.992351    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.992351    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Audit-Id: c1836498-4d32-49e6-a01e-d2011a223374
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.992796    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.992872    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.992872    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.993179    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.484052    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.484152    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.484152    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.484152    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.487262    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:54.487786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Audit-Id: f53da0c3-a775-4443-aabf-f7c4222d5d96
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.488171    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.984021    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.984123    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.984123    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.984123    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.989880    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:54.989880    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Audit-Id: c5095c7c-a76c-429e-af60-764abe494287
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.991622    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.485045    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.485181    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.485181    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.485181    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.489762    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:55.489762    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Audit-Id: 4f7c8477-81de-4b39-8164-bf264c826669
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.489762    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.490338    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.490338    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.490621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.987165    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.987255    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.987255    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.987255    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.990960    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:55.991209    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Audit-Id: 730af8dd-1c79-432a-ac28-d735f45d211a
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.991209    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:55.991993    8472 node_ready.go:49] node "multinode-392000" has status "Ready":"True"
	I1212 23:14:55.991993    8472 node_ready.go:38] duration metric: took 16.0282441s waiting for node "multinode-392000" to be "Ready" ...
	I1212 23:14:55.991993    8472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
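
After the node is Ready, minikube waits on each system-critical component by label, as the line above lists. Here is a minimal sketch, again assuming client-go, of checking the PodReady condition across those same label selectors; isPodReady and the kubeconfig path are assumed names for illustration, not minikube's pod_ready.go code.

// Sketch: list kube-system pods per component label and report readiness.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// One selector per component, mirroring the labels in the log line above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			pod := &pods.Items[i]
			fmt.Printf("%s ready=%v\n", pod.Name, isPodReady(pod))
		}
	}
}

A per-pod check of this shape is what each pod_ready.go wait that follows is doing, first for coredns-5dd5756b68-4xn8h and then for etcd-multinode-392000 and the remaining control-plane components.
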
	I1212 23:14:55.992424    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:55.992451    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.992451    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.992451    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.997828    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:55.997828    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Audit-Id: 52d7810c-f76c-4c45-9178-39943c5e611e
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:56.000563    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I1212 23:14:56.005713    8472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:56.005713    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.005713    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.005713    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.005713    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.009293    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:56.009293    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.009293    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Audit-Id: 349c895b-3263-4592-bf5f-cc4fce22f4db
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.009961    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.010548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.010601    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.010601    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.010670    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.013302    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.013302    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Audit-Id: 14638822-3485-4ab6-af72-f2d254050772
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.013994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.014102    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.014102    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.014313    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.014948    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.014948    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.014948    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.014948    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.017876    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.017876    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Audit-Id: e61611d3-94ea-464c-acce-2a665e01fb85
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.018073    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.018159    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.018325    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.018970    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.019023    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.019023    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.019078    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.020855    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:56.020855    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Audit-Id: d723e84b-6004-4853-8f4c-e9de464efdde
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.021772    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.021800    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.021800    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.021800    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.536622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.536622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.536622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.536622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.540896    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:56.540896    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.541442    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Audit-Id: ea416197-cb64-40af-bf73-38fd2e37a823
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.541534    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.541534    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.541670    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.542439    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.542559    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.542559    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.542559    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.544902    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.544902    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Audit-Id: 82379cb0-03c3-4187-8a08-c95f8c2d434e
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.546107    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.027636    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.027717    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.027791    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.027791    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.030425    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.030425    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Audit-Id: 856b15b9-b6fa-489d-9a24-eaaf1afc5bd5
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.031434    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.032501    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.032606    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.032658    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.032658    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.035158    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.035158    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Audit-Id: 2f81449f-83b9-4c66-bc2e-17ac17b48322
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.035158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.534454    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.534587    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.534587    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.534587    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.541021    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:57.541365    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Audit-Id: bb822741-a39c-491c-8b27-f5dc32b9ac7d
	I1212 23:14:57.541943    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.542190    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.542190    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.542190    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.542190    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.545257    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:57.545257    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.545896    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Audit-Id: 27629acd-42f2-4083-aba9-c01ef165283c
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.546084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.546084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.546180    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.546712    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:58.023516    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:58.023822    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.023880    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.023880    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.027764    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.028057    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.028057    8472 round_trippers.go:580]     Audit-Id: 1522c4b2-abdb-44ed-9ac8-0a151cbe371e
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.028173    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.028494    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1212 23:14:58.029540    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.029617    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.029617    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.029617    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.032006    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.033008    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Audit-Id: 5f970653-a2f7-4b0e-ab8b-5146ee17b7e9
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.033046    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.033115    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.033423    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.034124    8472 pod_ready.go:92] pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.034124    8472 pod_ready.go:81] duration metric: took 2.0284013s waiting for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034124    8472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034268    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-392000
	I1212 23:14:58.034268    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.034268    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.034268    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.040664    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:58.040664    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Audit-Id: 8ec23e55-3f6f-45bb-9dd5-58fa0a89221a
	I1212 23:14:58.041172    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-392000","namespace":"kube-system","uid":"9ba15872-d011-4389-bbbd-cda3bb377f30","resourceVersion":"299","creationTimestamp":"2023-12-12T23:14:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.51.245:2379","kubernetes.io/config.hash":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.mirror":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.seen":"2023-12-12T23:14:17.439033677Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1212 23:14:58.041719    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.041719    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.041719    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.041719    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.045328    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.045328    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Audit-Id: 9c560ca1-5f98-49b8-ae36-71e9aa076f5e
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.045328    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.045328    8472 pod_ready.go:92] pod "etcd-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.045328    8472 pod_ready.go:81] duration metric: took 11.2037ms waiting for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-392000
	I1212 23:14:58.046330    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.046330    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.046330    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.048649    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.048649    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Audit-Id: ebed4532-17cb-49da-a702-3de6ff899b2d
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.048649    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-392000","namespace":"kube-system","uid":"4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75","resourceVersion":"330","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.51.245:8443","kubernetes.io/config.hash":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.mirror":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.seen":"2023-12-12T23:14:26.871565960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1212 23:14:58.048649    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.048649    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.048649    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.052979    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.052979    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.052979    8472 round_trippers.go:580]     Audit-Id: ba4e3ef6-8436-406b-be77-63a9e785adac
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.053729    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.053941    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.054233    8472 pod_ready.go:92] pod "kube-apiserver-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.054233    8472 pod_ready.go:81] duration metric: took 8.9055ms waiting for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-392000
	I1212 23:14:58.054233    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.054233    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.054233    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.057795    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.057795    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.057795    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.057795    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Audit-Id: 23c9283e-f0e0-44ab-b1c7-820bcafbc897
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.058055    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.058481    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-392000","namespace":"kube-system","uid":"60a15f93-6e63-4c2e-a54e-7e6a2275127c","resourceVersion":"296","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.mirror":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.seen":"2023-12-12T23:14:26.871570660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1212 23:14:58.059284    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.059347    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.059347    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.059347    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.067599    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:58.067599    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Audit-Id: cd4581bf-1000-4906-812b-59a573920066
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.068544    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.068544    8472 pod_ready.go:92] pod "kube-controller-manager-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.068544    8472 pod_ready.go:81] duration metric: took 14.3106ms waiting for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.068544    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.194675    8472 request.go:629] Waited for 125.8741ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.194825    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.194825    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.198109    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.198109    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Audit-Id: 5a8d39b0-49cf-41c3-9e07-80cfc7e1b033
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.198994    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.199312    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55nr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"76f72515-2132-4473-883e-2846ebaca62e","resourceVersion":"403","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"932f2a4e-5c28-4c7c-8885-1298fbe1d167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932f2a4e-5c28-4c7c-8885-1298fbe1d167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I1212 23:14:58.398673    8472 request.go:629] Waited for 198.4474ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.398787    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.398966    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.401717    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.401717    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.401717    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Audit-Id: b728eb3e-d54c-43cb-90ce-e7b356f69ae4
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.402725    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.402828    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.403583    8472 pod_ready.go:92] pod "kube-proxy-55nr8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.403583    8472 pod_ready.go:81] duration metric: took 335.0375ms waiting for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.403583    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.601380    8472 request.go:629] Waited for 197.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.601681    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.601681    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.605957    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.606145    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Audit-Id: 02f9b40f-c4e0-4c98-bcbc-9913ccb796e7
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.606409    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-392000","namespace":"kube-system","uid":"1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2","resourceVersion":"295","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.mirror":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.seen":"2023-12-12T23:14:26.871571660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1212 23:14:58.789396    8472 request.go:629] Waited for 182.2618ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789688    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789779    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.789779    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.789828    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.793340    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.794060    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Audit-Id: e123c53f-d439-4d57-931f-9f875d26f581
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.794126    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.795030    8472 pod_ready.go:92] pod "kube-scheduler-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.795030    8472 pod_ready.go:81] duration metric: took 391.4452ms waiting for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.795030    8472 pod_ready.go:38] duration metric: took 2.8027177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
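
The pod_ready checks above all follow one pattern: GET the pod, scan status.conditions for the PodReady condition, and retry until it reports "True" or the per-pod 6m0s budget runs out. A minimal client-go sketch of that loop follows; the helper name, package name, and 2-second interval are illustrative, not minikube's:

    package k8swait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls one pod until its PodReady condition is True or the
    // timeout elapses, mirroring the GET-then-check sequence in the log above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

Called as waitPodReady(cs, "kube-system", "coredns-5dd5756b68-4xn8h", 6*time.Minute), this matches the 6m0s budget the log reports for each pod.
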
	I1212 23:14:58.795030    8472 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:14:58.810986    8472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:14:58.830637    8472 command_runner.go:130] > 2099
	I1212 23:14:58.830637    8472 api_server.go:72] duration metric: took 19.1794438s to wait for apiserver process to appear ...
	I1212 23:14:58.830637    8472 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:14:58.830637    8472 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:14:58.838776    8472 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
	I1212 23:14:58.839718    8472 round_trippers.go:463] GET https://172.30.51.245:8443/version
	I1212 23:14:58.839718    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.839718    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.839718    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.841290    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:58.841290    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.841290    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Length: 264
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.841836    8472 round_trippers.go:580]     Audit-Id: 46b8d46d-380f-4f82-941f-34d5ff7fc981
	I1212 23:14:58.841875    8472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:14:58.841973    8472 api_server.go:141] control plane version: v1.28.4
	I1212 23:14:58.842105    8472 api_server.go:131] duration metric: took 11.468ms to wait for apiserver health ...
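
The health probe just performed is plain HTTPS against the API server: GET /healthz must return 200 with the literal body "ok", then GET /version yields the build-info JSON printed above. A sketch, assuming an *http.Client already configured with the cluster CA and client certificates (that TLS wiring is elided):

    package apiprobe

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    // checkAPIServer probes base+"/healthz" for "ok", then decodes gitVersion
    // from base+"/version" (v1.28.4 in the run above).
    func checkAPIServer(client *http.Client, base string) (string, error) {
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return "", err
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return "", fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
        }

        resp, err = client.Get(base + "/version")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var v struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            return "", err
        }
        return v.GitVersion, nil
    }
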
	I1212 23:14:58.842105    8472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:14:58.990794    8472 request.go:629] Waited for 148.3275ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990949    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990993    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.990993    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.990993    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.996780    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:58.996780    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.997050    8472 round_trippers.go:580]     Audit-Id: ef9a1c82-2d0d-4fd5-aef9-3720896905c4
	I1212 23:14:58.998795    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.002276    8472 system_pods.go:59] 8 kube-system pods found
	I1212 23:14:59.002323    8472 system_pods.go:61] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.002414    8472 system_pods.go:74] duration metric: took 160.3082ms to wait for pod list to return data ...
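
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket limiter, which delays requests before they leave the process; the X-Kubernetes-Pf-* response headers above belong to the server-side priority-and-fairness machinery, a separate mechanism. The limiter is configured on the rest.Config; a sketch with illustrative values (not necessarily what minikube sets):

    package throttled

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with an explicit client-side rate limit.
    func newClient(kubeconfigPath string) *kubernetes.Clientset {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            log.Fatal(err)
        }
        cfg.QPS = 5    // steady-state requests per second (illustrative)
        cfg.Burst = 10 // burst ceiling; beyond it, waits like those logged occur
        return kubernetes.NewForConfigOrDie(cfg)
    }
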
	I1212 23:14:59.002414    8472 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:14:59.195077    8472 request.go:629] Waited for 192.5258ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.195622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.195622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.199306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.199787    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Audit-Id: d11e054d-44f1-4ba9-98c1-9a69160ffdff
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Length: 261
	I1212 23:14:59.199787    8472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7c305be4-9460-4ff1-a283-85a13dcb1cde","resourceVersion":"367","creationTimestamp":"2023-12-12T23:14:39Z"}}]}
	I1212 23:14:59.199787    8472 default_sa.go:45] found service account: "default"
	I1212 23:14:59.199787    8472 default_sa.go:55] duration metric: took 197.3719ms for default service account to be created ...
	I1212 23:14:59.199787    8472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:14:59.396801    8472 request.go:629] Waited for 196.4246ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.397321    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.397321    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.400691    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.400691    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Audit-Id: 70f11694-1074-4f5f-b23d-4a24efbaa455
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.403399    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.408656    8472 system_pods.go:86] 8 kube-system pods found
	I1212 23:14:59.409213    8472 system_pods.go:89] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.409293    8472 system_pods.go:126] duration metric: took 209.505ms to wait for k8s-apps to be running ...
	I1212 23:14:59.409358    8472 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:14:59.423142    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:59.445203    8472 system_svc.go:56] duration metric: took 35.9106ms WaitForService to wait for kubelet.
	I1212 23:14:59.445871    8472 kubeadm.go:581] duration metric: took 19.7946755s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:14:59.445871    8472 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:14:59.598916    8472 request.go:629] Waited for 152.7318ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.599012    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.599012    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.605849    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:59.605849    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Audit-Id: 36bbb4b8-2cd2-4825-9a0a-f9d3f7de5388
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.605849    8472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1212 23:14:59.606649    8472 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:14:59.606649    8472 node_conditions.go:123] node cpu capacity is 2
	I1212 23:14:59.606649    8472 node_conditions.go:105] duration metric: took 160.7768ms to run NodePressure ...
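
The NodePressure step reads the figures it just logged (17784752Ki of ephemeral storage, 2 CPUs) straight off the node list. A sketch of that read, reusing a client-go clientset as in the earlier sketches:

    // printNodeCapacity lists nodes and prints the two capacities checked above.
    // Assumes the same imports as the waitPodReady sketch, plus "fmt".
    func printNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
        return nil
    }
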
	I1212 23:14:59.606649    8472 start.go:228] waiting for startup goroutines ...
	I1212 23:14:59.606649    8472 start.go:233] waiting for cluster config update ...
	I1212 23:14:59.606649    8472 start.go:242] writing updated cluster config ...
	I1212 23:14:59.609246    8472 out.go:177] 
	I1212 23:14:59.621487    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:59.622710    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.625530    8472 out.go:177] * Starting worker node multinode-392000-m02 in cluster multinode-392000
	I1212 23:14:59.626570    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:14:59.626570    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:14:59.627622    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:14:59.627622    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:14:59.627622    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.635421    8472 start.go:365] acquiring machines lock for multinode-392000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:14:59.636404    8472 start.go:369] acquired machines lock for "multinode-392000-m02" in 983.5µs
	I1212 23:14:59.636641    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1212 23:14:59.636641    8472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1212 23:14:59.637295    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:14:59.637925    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:14:59.637925    8472 client.go:168] LocalClient.Create starting
	I1212 23:14:59.637925    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:14:59.638507    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.638593    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.638845    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:14:59.639076    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.639124    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.639207    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:15:01.516858    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:04.771547    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:04.771630    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:04.771709    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:08.419999    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:08.420189    8472 main.go:141] libmachine: [stderr =====>] : 
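
Each [executing ==>] line above is one powershell.exe invocation, and the [stdout =====>] and [stderr =====>] markers echo its two output streams back into the log. The pattern reduces to a helper along these lines (runPS is a hypothetical name, not minikube's):

    package psrun

    import (
        "bytes"
        "os/exec"
    )

    // runPS executes one PowerShell script non-interactively and returns its
    // stdout and stderr separately, matching the two markers in the log above.
    func runPS(script string) (string, string, error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script)
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        err := cmd.Run()
        return stdout.String(), stderr.String(), err
    }
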
	I1212 23:15:08.422680    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:15:08.872411    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: Creating VM...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:12.102765    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:12.102977    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:12.103063    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:15:12.103063    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:13.864474    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:13.864777    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:13.864985    8472 main.go:141] libmachine: Creating VHD
	I1212 23:15:13.864985    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C3CD4AE2-4C48-4AEE-B99B-DEEF0B4769F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:17.628988    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:15:17.629139    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:15:17.638018    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:20.769313    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -SizeBytes 20000MB
	I1212 23:15:23.326059    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:23.326281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:23.326443    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-392000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000-m02 -DynamicMemoryEnabled $false
	I1212 23:15:29.047581    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:29.047983    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:29.048174    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000-m02 -Count 2
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:31.217251    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\boot2docker.iso'
	I1212 23:15:33.748082    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd'
	I1212 23:15:36.359294    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: Starting VM...
	I1212 23:15:36.359738    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000-m02
	I1212 23:15:39.227776    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: [stderr =====>] : 
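	[editor note] The six cmdlets above create, configure, and start the node VM. A compact sketch of that bring-up, assuming the Hyper-V PowerShell module is available; the VM name and paths are placeholders:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
		for _, cmd := range []string{
			`Hyper-V\New-VM demo -Path 'C:\machines\demo' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
			`Hyper-V\Set-VMMemory -VMName demo -DynamicMemoryEnabled $false`, // fixed allocation
			`Hyper-V\Set-VMProcessor demo -Count 2`,
			`Hyper-V\Set-VMDvdDrive -VMName demo -Path 'C:\machines\demo\boot2docker.iso'`, // boot ISO
			`Hyper-V\Add-VMHardDiskDrive -VMName demo -Path 'C:\machines\demo\disk.vhd'`,   // data disk
			`Hyper-V\Start-VM demo`,
		} {
			if out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", cmd).CombinedOutput(); err != nil {
				fmt.Printf("%s failed: %v\n%s", cmd, err, out)
				return
			}
		}
	}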
	I1212 23:15:39.227906    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:15:39.228071    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:41.509631    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:44.031565    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:44.031787    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:45.038541    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:49.774015    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:49.774142    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:50.775721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:55.502870    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:55.503039    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:56.518873    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:58.738659    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:58.738736    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:58.738844    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:02.269014    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:04.506810    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:04.506866    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:04.506903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:07.087487    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:07.087855    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:07.088033    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stderr =====>] : 
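	[editor note] The loop above polls the VM state and then the first IP of the first network adapter until DHCP hands out an address (172.30.56.38 here), sleeping about a second between misses. A sketch of that wait loop; the VM name is a placeholder:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
		query := `(( Hyper-V\Get-VM demo ).networkadapters[0]).ipaddresses[0]`
		for {
			out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", query).Output()
			if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
				fmt.Println("VM reachable at", ip)
				return
			}
			time.Sleep(time.Second) // the log shows roughly one probe cycle per second
		}
	}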
	I1212 23:16:09.244063    8472 machine.go:88] provisioning docker machine ...
	I1212 23:16:09.244248    8472 buildroot.go:166] provisioning hostname "multinode-392000-m02"
	I1212 23:16:09.244333    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:11.421631    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:13.977447    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:13.977572    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:13.983166    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:13.992249    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:13.992249    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname
	I1212 23:16:14.163299    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000-m02
	
	I1212 23:16:14.163350    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:16.307595    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:18.839723    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:18.840482    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:18.840482    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:18.989326    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
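	[editor note] Provisioning runs shell commands over SSH, starting with the hostname and the /etc/hosts entry above. A sketch using golang.org/x/crypto/ssh, which libmachine's SSH client wraps; the key path and hostname are placeholders, and host-key checking is skipped as it would be for a throwaway VM:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(`C:\machines\demo\id_rsa`) // hypothetical key path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "172.30.56.38:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway VM, no known_hosts
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Same shape of command the log shows: set and persist the hostname.
		out, err := sess.CombinedOutput(`sudo hostname demo && echo "demo" | sudo tee /etc/hostname`)
		fmt.Printf("%s err=%v\n", out, err)
	}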
	I1212 23:16:18.990311    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:16:18.990311    8472 buildroot.go:174] setting up certificates
	I1212 23:16:18.990311    8472 provision.go:83] configureAuth start
	I1212 23:16:18.990453    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:21.069665    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:23.556570    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:28.222549    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:28.222832    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:28.222832    8472 provision.go:138] copyHostCerts
	I1212 23:16:28.223026    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:16:28.223356    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:16:28.223356    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:16:28.223923    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:16:28.224665    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:16:28.225195    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:16:28.225367    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:16:28.225569    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:16:28.226891    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:16:28.227287    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:16:28.227287    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:16:28.227775    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:16:28.228810    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000-m02 san=[172.30.56.38 172.30.56.38 localhost 127.0.0.1 minikube multinode-392000-m02]
	I1212 23:16:28.608171    8472 provision.go:172] copyRemoteCerts
	I1212 23:16:28.622324    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:28.622324    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:30.750561    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:33.272878    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:33.273157    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:33.273672    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:33.380622    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7582767s)
	I1212 23:16:33.380733    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:16:33.380808    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:16:33.420401    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:16:33.420965    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:16:33.458601    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:16:33.458774    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:16:33.496244    8472 provision.go:86] duration metric: configureAuth took 14.5058679s
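	[editor note] configureAuth generated a server certificate whose SANs cover the node IP, localhost, 127.0.0.1, and the machine names in the san=[...] list above, then copied ca.pem/server.pem/server-key.pem into /etc/docker. A self-signed sketch of the SAN construction with crypto/x509; the real flow signs with the minikube CA key rather than self-signing:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log above.
			DNSNames:    []string{"localhost", "minikube", "multinode-392000-m02"},
			IPAddresses: []net.IP{net.ParseIP("172.30.56.38"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}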
	I1212 23:16:33.496324    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:33.496868    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:16:33.497008    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:38.152182    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:38.152702    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:38.152702    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:16:38.292294    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:16:38.292294    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:16:38.292555    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:16:38.292555    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:40.464946    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:43.007365    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:43.008294    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:43.008294    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.51.245"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:16:43.171083    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.51.245
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:16:43.171185    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:45.284624    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:47.800669    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:47.801716    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:47.801716    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:16:48.748338    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
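	[editor note] The diff || { mv; systemctl; } one-liner above is an idempotent unit install: Docker is only reloaded, enabled, and restarted when the staged unit differs from the live one, and on this fresh node diff fails with "can't stat", so the install branch runs and systemctl enable creates the symlink shown. A sketch of the same pattern (executed over SSH in the real flow):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const cmd = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
			`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
			`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
		// Run locally here for illustration; minikube pushes this string over SSH.
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		fmt.Printf("%s err=%v\n", out, err)
	}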
	I1212 23:16:48.748338    8472 machine.go:91] provisioned docker machine in 39.5040974s
	I1212 23:16:48.748338    8472 client.go:171] LocalClient.Create took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:300] post-start starting for "multinode-392000-m02" (driver="hyperv")
	I1212 23:16:48.748887    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:48.762204    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:48.762204    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:50.863756    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:53.416608    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:53.526358    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7640815s)
	I1212 23:16:53.541029    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:53.550919    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:16:53.550919    8472 command_runner.go:130] > ID=buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:16:53.550919    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:16:53.551099    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:16:53.552635    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:16:53.552635    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:16:53.567223    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:53.582208    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:16:53.623271    8472 start.go:303] post-start completed in 4.8749111s
	I1212 23:16:53.626212    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:55.698604    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:58.239486    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:16:58.242308    8472 start.go:128] duration metric: createHost completed in 1m58.6051335s
	I1212 23:16:58.242308    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:00.321547    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:02.864207    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:02.864907    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:02.864907    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:03.006436    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423023.005320607
	
	I1212 23:17:03.006436    8472 fix.go:206] guest clock: 1702423023.005320607
	I1212 23:17:03.006436    8472 fix.go:219] Guest: 2023-12-12 23:17:03.005320607 +0000 UTC Remote: 2023-12-12 23:16:58.2423084 +0000 UTC m=+328.348317501 (delta=4.763012207s)
	I1212 23:17:03.006606    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:05.102311    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:07.631708    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:07.632284    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:07.632480    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702423023
	I1212 23:17:07.785418    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:17:03 UTC 2023
	
	I1212 23:17:07.785481    8472 fix.go:226] clock set: Tue Dec 12 23:17:03 UTC 2023 (err=<nil>)
	I1212 23:17:07.785481    8472 start.go:83] releasing machines lock for "multinode-392000-m02", held for 2m8.1482636s
	I1212 23:17:07.785678    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:09.909750    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:12.452194    8472 out.go:177] * Found network options:
	I1212 23:17:12.452963    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.453612    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.454421    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.455285    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:17:12.456641    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.458904    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:12.459089    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:12.471636    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:17:12.471636    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:14.665006    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.330171    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.349676    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.349791    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.350393    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.520588    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:17:17.520698    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0616953s)
	I1212 23:17:17.520789    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1212 23:17:17.520789    8472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0491302s)
	W1212 23:17:17.520789    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:17.540506    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:17.565496    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:17:17.565496    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:17.565629    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:17.565729    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:17.592642    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:17:17.606915    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:17:17.641476    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:17:17.660823    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:17:17.677875    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:17:17.711806    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.740097    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:17:17.771613    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.803488    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:17.833971    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:17:17.864431    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:17.880090    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:17:17.891942    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:17.921922    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:18.092747    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
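	[editor note] The sed commands above rewrite /etc/containerd/config.toml so containerd uses cgroupfs and the runc v2 shim before it is restarted. A sketch of the key edits; it assumes it runs on the node itself, whereas minikube issues each command over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		edits := []string{
			// Keep containerd on the cgroupfs driver rather than systemd.
			`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
			// Migrate any v1 linux runtime references to the runc v2 shim.
			`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
			`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
		}
		for _, e := range edits {
			if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
				fmt.Printf("%s: %v\n%s", e, err, out)
			}
		}
	}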
	I1212 23:17:18.119496    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:18.134351    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Unit]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:17:18.152056    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:17:18.152056    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:17:18.152056    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Service]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Type=notify
	I1212 23:17:18.152056    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:17:18.152056    8472 command_runner.go:130] > Environment=NO_PROXY=172.30.51.245
	I1212 23:17:18.152056    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:17:18.152056    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:17:18.152056    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:17:18.152056    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:17:18.152056    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:17:18.152056    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:17:18.152056    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:17:18.152056    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:17:18.153073    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:17:18.153073    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:17:18.153073    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:17:18.153073    8472 command_runner.go:130] > Delegate=yes
	I1212 23:17:18.153073    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:17:18.153073    8472 command_runner.go:130] > KillMode=process
	I1212 23:17:18.153073    8472 command_runner.go:130] > [Install]
	I1212 23:17:18.153073    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:17:18.165057    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.196057    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:18.246410    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.280066    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.313237    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:17:18.368580    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.388251    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:18.419806    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:17:18.434054    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:17:18.440054    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:17:18.453333    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:17:18.468540    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:17:18.509927    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:17:18.683814    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:17:18.837593    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:17:18.838769    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:17:18.883547    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:19.063745    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:18:20.172717    8472 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1212 23:18:20.172717    8472 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1212 23:18:20.172717    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1086969s)
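	[editor note] This is the failure the test reports: systemctl restart docker blocks for roughly 61 seconds and systemd exits the control process with an error, so the driver collects the unit journal shown below. A sketch of that error path:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// On a non-zero exit from the restart, capture the docker unit journal
		// for the failure report, as the log does next.
		if err := exec.Command("sudo", "systemctl", "restart", "docker").Run(); err != nil {
			out, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", "docker").CombinedOutput()
			fmt.Printf("docker restart failed: %v\n%s", err, out)
		}
	}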
	I1212 23:18:20.190447    8472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 23:18:20.208531    8472 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	I1212 23:18:20.208875    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	I1212 23:18:20.208924    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	I1212 23:18:20.208955    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209960    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210035    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1212 23:18:20.210087    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210221    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210255    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210295    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210368    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I1212 23:18:20.218077    8472 out.go:177] 
	W1212 23:18:20.218999    8472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1212 23:18:20.219707    8472 out.go:239] * 
	W1212 23:18:20.220544    8472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:18:20.221540    8472 out.go:177] 
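	The dockerd[1010] line above is the root cause: dockerd times out dialing containerd's socket during the restart, so docker.service never comes back up and minikube exits with RUNTIME_ENABLE. A minimal Go sketch of the same dial-with-deadline check (the socket path is the one named in the log; the 10-second timeout and the program itself are illustrative, not part of minikube):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// dockerd dials /run/containerd/containerd.sock on startup; if containerd
		// never comes up, the dial blocks until the deadline and fails the same
		// way as the "context deadline exceeded" error recorded above.
		conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 10*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}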
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:31:13 UTC. --
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282437620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.284918206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.285109705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286113599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286332798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7694fc2e072409c82e9a89c81cdb1dbf3955a826194d4c6ce69896a818ffd8c/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eec0e2bb8f7fb3f97224e573a86f1d0c8af411baddfa1adaa20402928c80977d/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.073894364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074049263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074069063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074078763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132115055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132325154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132351354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132362153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.818830729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820198629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820221327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820295222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:57 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef8f16e239bc98e7eb9dc0c53fd98c42346ab8c95f8981cda5dde4865c3765b9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 12 23:18:58 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524301867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524431958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524458956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524471055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c0d1460fe14b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   ef8f16e239bc9       busybox-5bc68d56bd-x7ldl
	d33bb583a4c67       ead0a4a53df89                                                                                         16 minutes ago      Running             coredns                   0                   eec0e2bb8f7fb       coredns-5dd5756b68-4xn8h
	f6b34e581fc6d       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   d7694fc2e0724       storage-provisioner
	58046948f7a39       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              16 minutes ago      Running             kindnet-cni               0                   13c6e0fbb4c87       kindnet-bpcxd
	a260d7090f938       83f6cc407eed8                                                                                         16 minutes ago      Running             kube-proxy                0                   60c6b551ada48       kube-proxy-55nr8
	2313251d444bd       e3db313c6dbc0                                                                                         16 minutes ago      Running             kube-scheduler            0                   2f8be6d8ad0b8       kube-scheduler-multinode-392000
	22eab41fa9507       73deb9a3f7025                                                                                         16 minutes ago      Running             etcd                      0                   bb073669c83d7       etcd-multinode-392000
	235957741d342       d058aa5ab969c                                                                                         16 minutes ago      Running             kube-controller-manager   0                   0a157140134cc       kube-controller-manager-multinode-392000
	6c354edfe4229       7fe0e6f37db33                                                                                         16 minutes ago      Running             kube-apiserver            0                   74927bb72940a       kube-apiserver-multinode-392000
	
	* 
	* ==> coredns [d33bb583a4c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52436 - 14801 "HINFO IN 6583598644721938310.5334892932610769491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082658561s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412009s
	[INFO] 10.244.0.3:57910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.064058426s
	[INFO] 10.244.0.3:37802 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.037057868s
	[INFO] 10.244.0.3:53205 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.098326683s
	[INFO] 10.244.0.3:48065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120602s
	[INFO] 10.244.0.3:58616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050508538s
	[INFO] 10.244.0.3:60247 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114602s
	[INFO] 10.244.0.3:38852 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191504s
	[INFO] 10.244.0.3:34962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01262466s
	[INFO] 10.244.0.3:40837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094102s
	[INFO] 10.244.0.3:50511 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000205404s
	[INFO] 10.244.0.3:46775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218404s
	[INFO] 10.244.0.3:51546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092302s
	[INFO] 10.244.0.3:51278 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170504s
	[INFO] 10.244.0.3:40156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096702s
	[INFO] 10.244.0.3:57387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190803s
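	The NXDOMAIN-then-NOERROR pairs above are ordinary search-path expansion, not resolver failures: cri-dockerd rewrote the pod's resolv.conf (see the re-write line in the Docker section below) to roughly the following, and with ndots:5 a short name such as kubernetes.default is tried against each search domain first, which is why kubernetes.default.default.svc.cluster.local returns NXDOMAIN before kubernetes.default.svc.cluster.local resolves:
	
	nameserver 10.96.0.10
	search default.svc.cluster.local svc.cluster.local cluster.local
	options ndots:5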
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-392000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:31:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:29:48 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:29:48 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:29:48 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:29:48 +0000   Tue, 12 Dec 2023 23:14:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.51.245
	  Hostname:    multinode-392000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 430cf12d1f18486bbb2dad5ba35f34f7
	  System UUID:                7ad4f3ea-4ba4-0c41-b258-b71782793bdf
	  Boot ID:                    de054c31-4928-4877-9a0d-94e8f25eb559
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x7ldl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-4xn8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-multinode-392000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-bpcxd                               100m (5%)    100m (5%)    50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-multinode-392000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-multinode-392000   200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-55nr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-multinode-392000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node multinode-392000 event: Registered Node multinode-392000 in Controller
	  Normal  NodeReady                16m                kubelet          Node multinode-392000 status is now: NodeReady
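	The Allocated resources figures in the table above can be checked from the per-pod requests alone (arithmetic over this output only, no additional data):
	
	cpu requests:    100m + 100m + 100m + 250m + 200m + 100m = 850m; 850m / 2000m allocatable = 42%
	memory requests: 70Mi + 100Mi + 50Mi = 220Mi; 220Mi / 2165980Ki (about 2115Mi) ≈ 10%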
	
	* 
	* ==> dmesg <==
	* [  +1.254662] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.084744] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.170112] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.825297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:13] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136611] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[ +29.496244] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.608816] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.164324] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.190534] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +1.324953] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.324912] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	[  +0.169479] systemd-fstab-generator[1166]: Ignoring "noauto" for root device
	[  +0.169520] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.165018] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.210508] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1309]: Ignoring "noauto" for root device
	[  +2.134792] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.270408] systemd-fstab-generator[1690]: Ignoring "noauto" for root device
	[  +0.838733] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.996306] systemd-fstab-generator[2661]: Ignoring "noauto" for root device
	[ +24.543609] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [22eab41fa950] <==
	* {"level":"info","ts":"2023-12-12T23:14:20.245805Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"93ff368cdeea47a1","initial-advertise-peer-urls":["https://172.30.51.245:2380"],"listen-peer-urls":["https://172.30.51.245:2380"],"advertise-client-urls":["https://172.30.51.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.51.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:14:20.357692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgPreVoteResp from 93ff368cdeea47a1 at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgVoteResp from 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 93ff368cdeea47a1 elected leader 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.361772Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.36777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"93ff368cdeea47a1","local-member-attributes":"{Name:multinode-392000 ClientURLs:[https://172.30.51.245:2379]}","request-path":"/0/members/93ff368cdeea47a1/attributes","cluster-id":"577d8ccb6648d9a8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:20.367821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.367989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.370538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.372122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.51.245:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.409981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"577d8ccb6648d9a8","local-member-id":"93ff368cdeea47a1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410139Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:20.410799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:24:20.417791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2023-12-12T23:24:20.419362Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":681,"took":"1.040537ms","hash":778906542}
	{"level":"info","ts":"2023-12-12T23:24:20.419458Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":778906542,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:29:20.427361Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2023-12-12T23:29:20.428786Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":922,"took":"784.101µs","hash":2156113925}
	{"level":"info","ts":"2023-12-12T23:29:20.428884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2156113925,"revision":922,"compact-revision":681}
	
	* 
	* ==> kernel <==
	*  23:31:13 up 18 min,  0 users,  load average: 0.61, 0.64, 0.45
	Linux multinode-392000 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [58046948f7a3] <==
	* I1212 23:29:11.881063       1 main.go:227] handling current node
	I1212 23:29:21.886716       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:29:21.886767       1 main.go:227] handling current node
	I1212 23:29:31.898401       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:29:31.898892       1 main.go:227] handling current node
	I1212 23:29:41.904764       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:29:41.904823       1 main.go:227] handling current node
	I1212 23:29:51.913776       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:29:51.913929       1 main.go:227] handling current node
	I1212 23:30:01.927979       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:01.928479       1 main.go:227] handling current node
	I1212 23:30:11.936946       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:11.937039       1 main.go:227] handling current node
	I1212 23:30:21.946071       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:21.946116       1 main.go:227] handling current node
	I1212 23:30:31.952473       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:31.952512       1 main.go:227] handling current node
	I1212 23:30:41.958156       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:41.958302       1 main.go:227] handling current node
	I1212 23:30:51.966359       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:51.966473       1 main.go:227] handling current node
	I1212 23:31:01.971984       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:31:01.972103       1 main.go:227] handling current node
	I1212 23:31:11.982740       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:31:11.982781       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6c354edfe422] <==
	* I1212 23:14:22.966861       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:22.967846       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:14:22.980339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:23.000634       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:14:23.000942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:23.002240       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:23.002278       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:23.002287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:23.002295       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:23.011378       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:23.760921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:14:23.770137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:14:23.770155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:24.576880       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:24.669218       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:14:24.814943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:14:24.825391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.51.245]
	I1212 23:14:24.827160       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:14:24.832899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:14:24.873569       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:26.688119       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:26.703417       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:14:26.718299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:38.752415       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 23:14:39.103035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [235957741d34] <==
	* I1212 23:14:39.402470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="640.526163ms"
	I1212 23:14:39.423878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.350638ms"
	I1212 23:14:39.455212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.288269ms"
	I1212 23:14:39.455353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.7µs"
	I1212 23:14:39.653487       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 23:14:39.680197       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-5g8ks"
	I1212 23:14:39.711806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.664787ms"
	I1212 23:14:39.734721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.862413ms"
	I1212 23:14:39.785084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.307746ms"
	I1212 23:14:39.785221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.699µs"
	I1212 23:14:55.812545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.499µs"
	I1212 23:14:55.831423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.3µs"
	I1212 23:14:57.948826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.3µs"
	I1212 23:14:57.994852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.967283ms"
	I1212 23:14:57.995045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.9µs"
	I1212 23:14:58.351328       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 23:18:56.342092       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 23:18:56.360783       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-x7ldl"
	I1212 23:18:56.372461       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4rg9t"
	I1212 23:18:56.394927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.064871ms"
	I1212 23:18:56.421496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.459964ms"
	I1212 23:18:56.445750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.867827ms"
	I1212 23:18:56.446077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="103.493µs"
	I1212 23:18:59.452572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.321812ms"
	I1212 23:18:59.452821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.694µs"
	
	* 
	* ==> kube-proxy [a260d7090f93] <==
	* I1212 23:14:40.548388       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:40.568436       1 node.go:141] Successfully retrieved node IP: 172.30.51.245
	I1212 23:14:40.635432       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:40.635716       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:40.638923       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:40.639152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:40.639551       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:40.640017       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:40.641081       1 config.go:188] "Starting service config controller"
	I1212 23:14:40.641288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:40.641685       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:40.641937       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:40.644879       1 config.go:315] "Starting node config controller"
	I1212 23:14:40.645073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:40.742503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:14:40.742567       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:40.745261       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2313251d444b] <==
	* W1212 23:14:22.973548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:22.973806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.868650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:14:23.868677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:14:23.880821       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:14:23.880850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:23.906825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:14:23.907043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:14:23.908460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:23.909050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.954797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:23.954886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:23.961825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:14:23.961846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:14:24.085183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:14:24.085212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:14:24.103672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:14:24.103696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:14:24.119305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:24.119483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:24.143381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:14:24.143650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:14:24.300755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:14:24.300991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:14:25.823950       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:31:13 UTC. --
	Dec 12 23:24:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:25:27 multinode-392000 kubelet[2682]: E1212 23:25:27.002943    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:25:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:25:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:25:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:26:27 multinode-392000 kubelet[2682]: E1212 23:26:27.002191    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:26:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:26:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:26:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:27:27 multinode-392000 kubelet[2682]: E1212 23:27:27.001369    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:27:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:27:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:27:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:28:27 multinode-392000 kubelet[2682]: E1212 23:28:27.001779    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:28:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:28:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:28:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:29:27 multinode-392000 kubelet[2682]: E1212 23:29:27.005449    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:29:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:29:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:29:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:30:27 multinode-392000 kubelet[2682]: E1212 23:30:27.005887    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [f6b34e581fc6] <==
	* I1212 23:14:57.324469       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:14:57.354186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:14:57.354226       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:14:57.375032       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:14:57.377324       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1!
	I1212 23:14:57.379047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"843046f3-0fcd-4f8f-8bbf-0d83d2c229ac", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1 became leader
	I1212 23:14:57.478231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:31:05.893230    7940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
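
The kubelet section of the logs above repeats the same canary failure once a minute: ip6tables (legacy) cannot create the KUBE-KUBELET-CANARY chain because the guest kernel never exposes an ip6tables nat table, i.e. ip6table_nat is not loaded or built into the Buildroot kernel, exactly what the "do you need to insmod?" hint suggests. The probe below is a minimal standalone sketch of that check, assuming a Linux machine with ip6tables on the PATH; it is not kubelet code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the kubelet canary performs in spirit: ask ip6tables to
	// list the nat table and see whether the kernel exposes it at all.
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// On the minikube guest above this path is taken with exit status 3,
		// the "Table does not exist" case from the kubelet log.
		fmt.Printf("ip6tables nat table unavailable (exit %d): %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("ip6tables not runnable:", err)
		return
	}
	fmt.Println("ip6tables nat table present")
}
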
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000
E1212 23:31:22.641069   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:31:25.440144   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000: (11.9007532s)
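
The recurring "Unable to resolve the current Docker CLI context" warning above is unrelated to this failure but easy to decode: the Docker CLI keys each context's metadata directory by the SHA-256 of the context name, so the long hex segment in the missing meta.json path is simply sha256("default"). A minimal check in Go:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// The Docker CLI stores context metadata under a directory named after
	// the SHA-256 of the context name; this reproduces the hex segment in
	// the warning's meta.json path.
	digest := sha256.Sum256([]byte("default"))
	fmt.Printf("%x\n", digest)
	// Prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
	// matching the path in the warnings above.
}
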
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-392000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-4rg9t
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t
helpers_test.go:282: (dbg) kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t:

                                                
                                                
-- stdout --
	Name:             busybox-5bc68d56bd-4rg9t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrqjf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hrqjf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m30s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (751.88s)
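
The describe output above already names the root cause: the busybox deployment evidently spreads its replicas with required pod anti-affinity, only one of the two requested nodes is present (the kindnet log above only ever handles the current node), and with a single node the second replica can never be placed, hence the FailedScheduling event and the Pending pod. Below is a sketch of the kind of anti-affinity term that produces exactly this event; the app=busybox label comes from the pod's labels above, while the kubernetes.io/hostname topology key is an assumption about the test manifest (multinode-pod-dns-test.yaml), not something visible in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Required anti-affinity: no two pods carrying app=busybox may share a
	// node (per-hostname topology). With one Ready node and one replica
	// already placed, the second replica is unschedulable, matching the
	// FailedScheduling event above.
	antiAffinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", antiAffinity)
}
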

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (45.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-4rg9t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:588: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-4rg9t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (435.251ms)

                                                
                                                
** stderr ** 
	W1212 23:31:28.214780   15200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-5bc68d56bd-4rg9t does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:590: Pod busybox-5bc68d56bd-4rg9t could not resolve 'host.minikube.internal': exit status 1
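
The pipeline in the command above is small text surgery over busybox-style nslookup output: the first two lines report the DNS server, line 5 is expected to be the answer line "Address 1: <ip> <name>", so awk 'NR==5' selects it and cut -d' ' -f3 takes the IP as the third space-separated field. A sketch of the same extraction, using illustrative output (the addresses are examples, not taken from this run):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// busybox-style nslookup output, the shape the test's pipeline assumes:
	// line 5 is the answer line "Address 1: <ip> <name>".
	out := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 172.30.48.1 host.minikube.internal`

	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		fmt.Println("unexpected nslookup output")
		return
	}
	// awk 'NR==5' selects line 5; cut -d' ' -f3 takes the third
	// space-separated field, i.e. the resolved IP.
	fields := strings.Split(lines[4], " ")
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // 172.30.48.1
	}
}
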
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-x7ldl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-x7ldl -- sh -c "ping -c 1 172.30.48.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-392000 -- exec busybox-5bc68d56bd-x7ldl -- sh -c "ping -c 1 172.30.48.1": exit status 1 (10.5142903s)

                                                
                                                
-- stdout --
	PING 172.30.48.1 (172.30.48.1): 56 data bytes
	
	--- 172.30.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:31:29.182650    5412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (172.30.48.1) from pod (busybox-5bc68d56bd-x7ldl): exit status 1
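
The statistics block shows the echo request left the pod but nothing answered from 172.30.48.1, presumably the address the preceding nslookup returned for host.minikube.internal, i.e. the Hyper-V host's address on the VM network. A common culprit on this driver, though not proven by this log, is the Windows host firewall dropping inbound ICMP echo from the guest subnet. For reference, a standalone sketch of the probe the test runs inside the pod, assuming a ping binary that accepts -c:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One echo request, as in the test: busybox ping exits non-zero when
	// every packet is lost, which is what "exit status 1" above reflects.
	out, err := exec.Command("ping", "-c", "1", "172.30.48.1").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// Distinguish total packet loss from ping being absent or unusable.
		if strings.Contains(string(out), "100% packet loss") {
			fmt.Println("host unreachable from this network")
			return
		}
		fmt.Println("ping failed:", err)
	}
}
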
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000: (11.936618s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25: (8.2549703s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-392000                               | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --wait=true --memory=2200                         |                  |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- apply -f                   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC | 12 Dec 23 23:18 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- rollout                    | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t -- nslookup              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl -- nslookup              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t                          |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | busybox-5bc68d56bd-x7ldl                          |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-x7ldl -- sh                    |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.48.1                          |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:11:30
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.071716    8472 out.go:309] Setting ErrFile to fd 756...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.094706    8472 out.go:303] Setting JSON to false
	I1212 23:11:30.097728    8472 start.go:128] hostinfo: {"hostname":"minikube7","uptime":76287,"bootTime":1702346402,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:11:30.097728    8472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:11:30.099331    8472 out.go:177] * [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:11:30.099722    8472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:11:30.099722    8472 notify.go:220] Checking for updates...
	I1212 23:11:30.100958    8472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:11:30.101483    8472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:11:30.102516    8472 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:11:30.103354    8472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:11:30.104853    8472 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:11:35.379035    8472 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:11:35.380001    8472 start.go:298] selected driver: hyperv
	I1212 23:11:35.380001    8472 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:11:35.380001    8472 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:11:35.430879    8472 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:11:35.431976    8472 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:11:35.432174    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:11:35.432174    8472 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:11:35.432174    8472 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:11:35.432174    8472 start_flags.go:323] config:
	{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:35.432785    8472 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:11:35.434592    8472 out.go:177] * Starting control plane node multinode-392000 in cluster multinode-392000
	I1212 23:11:35.434882    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:11:35.435410    8472 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:11:35.435444    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:11:35.435894    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:11:35.435894    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:11:35.436458    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:11:35.436458    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json: {Name:mk07adc881ba1a1ec87edb34c2760e84e9f12eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:35.438010    8472 start.go:365] acquiring machines lock for multinode-392000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:11:35.438172    8472 start.go:369] acquired machines lock for "multinode-392000" in 43.3µs
	I1212 23:11:35.438240    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:11:35.438240    8472 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 23:11:35.439294    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:11:35.439734    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:11:35.439996    8472 client.go:168] LocalClient.Create starting
	I1212 23:11:35.440162    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441050    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441543    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:11:37.487993    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:11:39.204044    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:11:39.204143    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:39.204222    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:40.663233    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:44.190819    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:44.191081    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:44.194062    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:11:44.711737    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: Creating VM...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:47.732456    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:47.732576    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:47.732727    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:11:47.732880    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:49.467956    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:49.468070    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:49.468070    8472 main.go:141] libmachine: Creating VHD
	I1212 23:11:49.468208    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F469FE2D-E21B-45E1-BE12-1FCB18DB12B2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:11:53.108721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:56.276637    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -SizeBytes 20000MB
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stderr =====>] : 
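	
	The disk image above is built in three cmdlet calls: create a tiny fixed-size VHD, convert it to a dynamic one, then grow it to the requested 20000MB; between the first two steps the driver writes the "magic" tar header and the SSH key straight into the raw image so the guest can import them on first boot. A compact Go sketch of driving the same sequence (paths are placeholders; the tar-header write is only noted in a comment):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// ps runs one PowerShell statement the way the log shows the driver doing it.
	func ps(script string) error {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v\n%s", script, err, out)
		}
		return nil
	}
	
	func main() {
		dir := `C:\path\to\machines\multinode-392000` // placeholder
		fixed, disk := dir+`\fixed.vhd`, dir+`\disk.vhd`
		steps := []string{
			// 1. tiny fixed VHD; the driver then writes the tar header plus
			//    the SSH key into the raw image before converting it
			fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
			// 2. convert to dynamic, dropping the fixed source
			fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
			// 3. grow to the requested size
			fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes 20000MB`, disk),
		}
		for _, s := range steps {
			if err := ps(s); err != nil {
				panic(err)
			}
		}
	}
	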
	I1212 23:11:58.764692    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-392000 Off   0           0                 00:00:00 Operating normally 9.0
	
	
	
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000 -DynamicMemoryEnabled $false
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:04.436332    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000 -Count 2
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\boot2docker.iso'
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd'
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:11.817904    8472 main.go:141] libmachine: Starting VM...
	I1212 23:12:11.817904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:12:14.636759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:16.857062    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:16.857260    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:16.857330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:20.386945    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:22.605951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:26.191747    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:28.348821    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:30.824944    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:30.825184    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:31.825449    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:36.445712    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:36.445785    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:37.459217    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:42.223526    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:44.305043    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:44.305406    8472 main.go:141] libmachine: [stderr =====>] : 
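	
	Everything from "Waiting for host to start..." to the first 172.30.51.245 line above is one loop: the VM reports Running almost immediately, but its first adapter has no address until DHCP finishes, so the state and IP queries are reissued with a pause between attempts. A sketch of that loop, assuming the same PowerShell queries as the log:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func psOut(script string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
		return strings.TrimSpace(string(out)), err
	}
	
	// waitForIP re-polls state and IP until DHCP has handed out an address.
	func waitForIP(vm string, attempts int) (string, error) {
		for i := 0; i < attempts; i++ {
			state, err := psOut(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
			if err != nil {
				return "", err
			}
			if state == "Running" {
				ip, err := psOut(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
				if err == nil && ip != "" {
					return ip, nil
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("%s: no IP after %d attempts", vm, attempts)
	}
	
	func main() {
		ip, err := waitForIP("multinode-392000", 60)
		if err != nil {
			panic(err)
		}
		fmt.Println("host up at", ip)
	}
	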
	I1212 23:12:44.305406    8472 machine.go:88] provisioning docker machine ...
	I1212 23:12:44.305506    8472 buildroot.go:166] provisioning hostname "multinode-392000"
	I1212 23:12:44.305650    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:46.463699    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:48.946017    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:48.946116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:48.952068    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:48.964084    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:48.964084    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname
	I1212 23:12:49.130659    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000
	
	I1212 23:12:49.130793    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:51.216440    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:53.725386    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:53.726016    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:53.726016    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:12:53.876910    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
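	
	Each "About to run SSH command" step opens a session on the guest, runs one shell snippet (here: setting the hostname, then keeping the 127.0.1.1 entry in /etc/hosts consistent with it), and captures the combined output. A sketch of such a runner using golang.org/x/crypto/ssh (the key path and the host-key policy are assumptions for a throwaway test VM, not minikube's exact code):
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	// runSSH opens one session per command, which is how the provisioner's
	// "About to run SSH command" steps behave in the log.
	func runSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}
	
	func main() {
		key := `C:\path\to\machines\multinode-392000\id_rsa` // placeholder
		// Same idempotent hostname fix-up as the log: set the kernel hostname,
		// then persist it; the /etc/hosts edit follows the same pattern.
		out, err := runSSH("172.30.51.245:22", "docker", key,
			`sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname`)
		fmt.Println(out, err)
	}
	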
	I1212 23:12:53.876910    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:12:53.877039    8472 buildroot.go:174] setting up certificates
	I1212 23:12:53.877109    8472 provision.go:83] configureAuth start
	I1212 23:12:53.877163    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:55.991772    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:58.499603    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:00.594939    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:03.100178    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:03.100273    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:03.100273    8472 provision.go:138] copyHostCerts
	I1212 23:13:03.100538    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:13:03.100666    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:13:03.100666    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:13:03.101260    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:13:03.102786    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:13:03.103156    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:13:03.103156    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:13:03.103581    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:13:03.104593    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:13:03.105032    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:13:03.105032    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:13:03.105182    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:13:03.106302    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000 san=[172.30.51.245 172.30.51.245 localhost 127.0.0.1 minikube multinode-392000]
	I1212 23:13:03.360027    8472 provision.go:172] copyRemoteCerts
	I1212 23:13:03.374057    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:13:03.374057    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:08.008195    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:08.116237    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7420653s)
	I1212 23:13:08.116237    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:13:08.116427    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:13:08.152557    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:13:08.153040    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:13:08.195988    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:13:08.196559    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:13:08.232338    8472 provision.go:86] duration metric: configureAuth took 14.3551646s
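	
	The configureAuth step regenerates a server certificate whose SANs cover the VM IP, localhost, 127.0.0.1, and the machine names, signs it with the local CA, then copies ca.pem, server.pem, and server-key.pem into /etc/docker. A hedged sketch of the SAN handling with Go's crypto/x509 (a throwaway CA is generated inline; PEM encoding is omitted; this shows the general technique, not minikube's exact implementation):
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	// newServerCert builds a TLS server cert: IP-shaped SANs go into
	// IPAddresses, everything else into DNSNames, then the CA signs it.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, *rsa.PrivateKey, error) {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
	
	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)
		der, _, err := newServerCert(ca, caKey, []string{"172.30.51.245", "localhost", "127.0.0.1", "minikube", "multinode-392000"})
		if err != nil {
			panic(err)
		}
		fmt.Println("server cert DER bytes:", len(der))
	}
	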
	I1212 23:13:08.232338    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:13:08.233351    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:13:08.233351    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:10.326980    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:12.830327    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:12.831103    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:12.831103    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:13:12.971332    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:13:12.971397    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:13:12.971686    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:13:12.971759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:17.524781    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:17.524929    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:17.532264    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:17.532875    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:17.533036    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:13:17.693682    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:13:17.693682    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:19.797719    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:22.305428    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:22.305611    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:22.311364    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:22.312148    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:22.312148    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:13:23.268460    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:13:23.268460    8472 machine.go:91] provisioned docker machine in 38.9628792s
	I1212 23:13:23.268460    8472 client.go:171] LocalClient.Create took 1m47.8279792s
	I1212 23:13:23.268460    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m47.8282413s
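	
	The diff/mv one-liner a few steps back is an update-only-if-changed install: if the freshly rendered docker.service.new matches the installed unit, nothing happens; otherwise it is moved into place and docker is reloaded, enabled, and restarted. The "diff: can't stat" output above simply means no unit existed yet on first boot, so the new file always wins. A small Go helper that assembles the same remote command (illustrative):
	
	package main
	
	import "fmt"
	
	// installUnitCmd reproduces the idempotent update in the log: only when the
	// freshly written <unit>.new differs from the installed unit does it move
	// the file into place and reload/enable/restart the service.
	func installUnitCmd(unit string) string {
		path := "/lib/systemd/system/" + unit
		return fmt.Sprintf(
			"sudo diff -u %s %s.new || { sudo mv %s.new %s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %s && sudo systemctl -f restart %s; }",
			path, path, path, path, unit, unit)
	}
	
	func main() { fmt.Println(installUnitCmd("docker.service")) }
	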
	I1212 23:13:23.268460    8472 start.go:300] post-start starting for "multinode-392000" (driver="hyperv")
	I1212 23:13:23.268460    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:13:23.283134    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:13:23.283134    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:25.344143    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:25.344398    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:25.344531    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:27.853202    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:27.960465    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6773102s)
	I1212 23:13:27.975019    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:13:27.981168    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:13:27.981317    8472 command_runner.go:130] > ID=buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:13:27.981317    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:13:27.981408    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:13:27.981509    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:13:27.981573    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:13:27.982899    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:13:27.982899    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:13:27.996731    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:13:28.011281    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:13:28.049499    8472 start.go:303] post-start completed in 4.7810169s
	I1212 23:13:28.051903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:30.124520    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:32.635986    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:32.636168    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:32.636335    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:13:32.639612    8472 start.go:128] duration metric: createHost completed in 1m57.2008454s
	I1212 23:13:32.639734    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:37.252006    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:37.252675    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:37.252675    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:13:37.394466    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422817.389981544
	
	I1212 23:13:37.394466    8472 fix.go:206] guest clock: 1702422817.389981544
	I1212 23:13:37.394466    8472 fix.go:219] Guest: 2023-12-12 23:13:37.389981544 +0000 UTC Remote: 2023-12-12 23:13:32.6396781 +0000 UTC m=+122.746612401 (delta=4.750303444s)
	I1212 23:13:37.394466    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:39.525951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:42.048856    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:42.049171    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:42.054999    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:42.057020    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:42.057020    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702422817
	I1212 23:13:42.207558    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:13:37 UTC 2023
	
	I1212 23:13:42.207558    8472 fix.go:226] clock set: Tue Dec 12 23:13:37 UTC 2023
	 (err=<nil>)
	I1212 23:13:42.207558    8472 start.go:83] releasing machines lock for "multinode-392000", held for 2m6.7687735s
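	
	The clock fix-up above reads "date +%s.%N" from the guest, compares it with the host clock (a 4.75s delta here), and settles the difference with "sudo date -s @<seconds>". A sketch of the decision logic; the drift threshold is an assumption, and actually issuing the command is left to an SSH runner like the one sketched earlier:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// fixClock parses the guest's `date +%s.%N` output and, when the drift
	// against the host exceeds threshold, returns the settle command to run.
	func fixClock(guestOut string, threshold time.Duration) (string, bool) {
		f, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return "", false
		}
		secs := int64(f)
		guest := time.Unix(secs, int64((f-float64(secs))*1e9))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		if delta <= threshold {
			return "", false
		}
		return fmt.Sprintf("sudo date -s @%d", time.Now().Unix()), true
	}
	
	func main() {
		cmd, need := fixClock("1702422817.389981544", 2*time.Second)
		fmt.Println(need, cmd)
	}
	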
	I1212 23:13:42.208388    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:46.748039    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:46.748116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:46.752230    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:13:46.752339    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:46.765270    8472 ssh_runner.go:195] Run: cat /version.json
	I1212 23:13:46.765814    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:51.518393    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.518589    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.519047    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.538571    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.618146    8472 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 23:13:51.618146    8472 ssh_runner.go:235] Completed: cat /version.json: (4.8528548s)
	I1212 23:13:51.632470    8472 ssh_runner.go:195] Run: systemctl --version
	I1212 23:13:51.705182    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:13:51.705326    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9530322s)
	I1212 23:13:51.705474    8472 command_runner.go:130] > systemd 247 (247)
	I1212 23:13:51.705474    8472 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:13:51.717133    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:13:51.725591    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:13:51.726008    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:13:51.738060    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:13:51.760525    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:13:51.761431    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:13:51.761431    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:51.761737    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:51.787290    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:13:51.802604    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:13:51.833298    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:13:51.849124    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:13:51.865424    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:13:51.896430    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.925062    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:13:51.954292    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.986199    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:13:52.018341    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:13:52.051014    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:13:52.066722    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:13:52.079021    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:13:52.108672    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:52.285653    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
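	
	Before settling on docker, the runner rewrites /etc/containerd/config.toml in place with a series of sed programs: pin the sandbox image, force restrict_oom_score_adj and SystemdCgroup off (matching the cgroupfs driver), migrate runtime names to runc.v2, and point conf_dir at /etc/cni/net.d. The highlights, collected as data in a short Go sketch (trimmed, not an exhaustive list):
	
	package main
	
	import "fmt"
	
	// containerd fix-ups from the log, expressed as the sed programs the
	// runner executes over /etc/containerd/config.toml.
	var containerdEdits = []string{
		`s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|`,
		`s|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|`,
		`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`, // cgroupfs driver
		`s|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g`,
		`s|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g`,
	}
	
	func main() {
		for _, e := range containerdEdits {
			// each becomes: sh -c "sudo sed -i -r '<program>' /etc/containerd/config.toml"
			fmt.Printf("sudo sed -i -r '%s' /etc/containerd/config.toml\n", e)
		}
	}
	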
	I1212 23:13:52.311279    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:52.326723    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Unit]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:13:52.345659    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:13:52.345659    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:13:52.345659    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Service]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Type=notify
	I1212 23:13:52.345659    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:13:52.345659    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:13:52.346602    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:13:52.346602    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:13:52.346602    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:13:52.346602    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:13:52.346602    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:13:52.346602    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:13:52.346602    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:13:52.346602    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:13:52.346602    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:13:52.346602    8472 command_runner.go:130] > Delegate=yes
	I1212 23:13:52.346602    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:13:52.346602    8472 command_runner.go:130] > KillMode=process
	I1212 23:13:52.346602    8472 command_runner.go:130] > [Install]
	I1212 23:13:52.346602    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:13:52.361605    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.398612    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:13:52.438497    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.478249    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.515469    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:13:52.572526    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.596922    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:52.625715    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:13:52.640295    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:13:52.648317    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:13:52.660918    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:13:52.675527    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:13:52.716542    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:13:52.882321    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:13:53.028395    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:13:53.028810    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:13:53.070347    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:53.231794    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:13:54.707655    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4758548s)
	I1212 23:13:54.722714    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:54.886957    8472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 23:13:55.059072    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:55.219495    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.397909    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 23:13:55.436243    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.597738    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 23:13:55.697504    8472 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 23:13:55.711625    8472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:13:55.718995    8472 command_runner.go:130] > Device: 16h/22d	Inode: 928         Links: 1
	I1212 23:13:55.718995    8472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 23:13:55.719086    8472 command_runner.go:130] > Access: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Modify: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Change: 2023-12-12 23:13:55.617702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] >  Birth: -
	I1212 23:13:55.719245    8472 start.go:543] Will wait 60s for crictl version
	I1212 23:13:55.732224    8472 ssh_runner.go:195] Run: which crictl
	I1212 23:13:55.737239    8472 command_runner.go:130] > /usr/bin/crictl
	I1212 23:13:55.751402    8472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:13:55.821560    8472 command_runner.go:130] > Version:  0.1.0
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeName:  docker
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:13:55.821684    8472 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
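	
	"Will wait 60s for socket path" and "Will wait 60s for crictl version" are the same shape: retry a cheap check until it succeeds or a deadline passes. A generic sketch of that wait loop; the check body is stubbed here, where the log runs stat on /var/run/cri-dockerd.sock and then crictl version over SSH:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// waitFor retries check until it succeeds or timeout elapses.
	func waitFor(what string, timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s: %v", what, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		// Illustrative: a real check would run `stat /var/run/cri-dockerd.sock`
		// (or `crictl version`) on the guest over SSH.
		if err := waitFor("cri-dockerd socket", 60*time.Second, func() error { return nil }); err != nil {
			panic(err)
		}
		fmt.Println("socket ready")
	}
	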
	I1212 23:13:55.831458    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.865302    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.877867    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.906635    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.909704    8472 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 23:13:55.909704    8472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: 172.30.48.1/20
	I1212 23:13:55.931095    8472 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 23:13:55.936984    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:13:55.954782    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:13:55.966850    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:13:55.989987    8472 docker.go:671] Got preloaded images: 
	I1212 23:13:55.989987    8472 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 23:13:56.002978    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:13:56.016572    8472 command_runner.go:139] > {"Repositories":{}}
	I1212 23:13:56.029505    8472 ssh_runner.go:195] Run: which lz4
	I1212 23:13:56.035359    8472 command_runner.go:130] > /usr/bin/lz4
	I1212 23:13:56.035359    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:13:56.046382    8472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:13:56.052856    8472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 23:13:58.736125    8472 docker.go:635] Took 2.700536 seconds to copy over tarball
	I1212 23:13:58.753146    8472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:14:08.022919    8472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2697318s)
	I1212 23:14:08.022919    8472 ssh_runner.go:146] rm: /preloaded.tar.lz4
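	
	The preload fast path: because the guest's repositories.json was empty, the ~423MB preloaded-images tarball is copied across, unpacked into /var (which populates /var/lib/docker directly, about 9.3s here), and removed. A sketch with stubbed runners so it compiles standalone; the real code stats the remote path first and skips the copy when the tarball is already present:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// Stub runners so the sketch compiles; in the driver these go over SSH/SCP.
	func runSSH(cmd string) (string, error)  { fmt.Println("ssh:", cmd); return "", nil }
	func scpFile(local, remote string) error { fmt.Println("scp:", local, "->", remote); return nil }
	
	// preloadImages copies the preload tarball to the guest, unpacks it into
	// /var, and deletes it, mirroring the three steps in the log.
	func preloadImages(local, remote string) error {
		if err := scpFile(local, remote); err != nil {
			return err
		}
		start := time.Now()
		if _, err := runSSH("sudo tar -I lz4 -C /var -xf " + remote); err != nil {
			return err
		}
		fmt.Printf("unpacked preload in %s\n", time.Since(start))
		_, err := runSSH("sudo rm -f " + remote)
		return err
	}
	
	func main() {
		if err := preloadImages(`C:\path\to\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4`, "/preloaded.tar.lz4"); err != nil {
			panic(err)
		}
	}
	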
	I1212 23:14:08.095190    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:14:08.111721    8472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 23:14:08.111721    8472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 23:14:08.157625    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:14:08.340167    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:14:10.676687    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3364436s)
	I1212 23:14:10.688217    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:14:10.713622    8472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 23:14:10.713688    8472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:10.713884    8472 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 23:14:10.713884    8472 cache_images.go:84] Images are preloaded, skipping loading
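
"Images are preloaded, skipping loading" is the result of comparing the `docker images` listing above against the images the selected Kubernetes version requires. A sketch of that check, with both lists hard-coded from the log (the real logic lives in minikube's cache_images.go):

    package main

    import "fmt"

    func main() {
        have := map[string]bool{}
        for _, img := range []string{ // from `docker images` above
            "registry.k8s.io/kube-apiserver:v1.28.4",
            "registry.k8s.io/kube-proxy:v1.28.4",
            "registry.k8s.io/kube-controller-manager:v1.28.4",
            "registry.k8s.io/kube-scheduler:v1.28.4",
            "registry.k8s.io/etcd:3.5.9-0",
            "registry.k8s.io/coredns/coredns:v1.10.1",
            "registry.k8s.io/pause:3.9",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        } {
            have[img] = true
        }
        // Abbreviated required list for illustration.
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.28.4",
            "registry.k8s.io/etcd:3.5.9-0",
        }
        var missing []string
        for _, img := range required {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        if len(missing) == 0 {
            fmt.Println("images are preloaded, skipping loading")
        } else {
            fmt.Println("need to load:", missing)
        }
    }
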
	I1212 23:14:10.725093    8472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 23:14:10.761269    8472 command_runner.go:130] > cgroupfs
	I1212 23:14:10.761441    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:10.761635    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:10.761699    8472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:14:10.761699    8472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.51.245 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-392000 NodeName:multinode-392000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.51.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.51.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:14:10.761920    8472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.51.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-392000"
	  kubeletExtraArgs:
	    node-ip: 172.30.51.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.51.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
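
The "0%!"(MISSING) values in the evictionHard block above are a logging artifact, not the config: the values are literal "0%" strings, and echoing the rendered YAML through a Printf-style call with no arguments makes fmt parse each % as a formatting verb and emit its missing-operand marker. A two-line Go demonstration:

    package main

    import "fmt"

    func main() {
        cfg := `nodefs.available: "0%"`
        fmt.Printf(cfg + "\n")  // unsafe: prints nodefs.available: "0%!"(MISSING)
        fmt.Printf("%s\n", cfg) // safe: prints the literal string
    }
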
	I1212 23:14:10.762050    8472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
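
The kubelet unit snippet above uses systemd's override idiom: the bare `ExecStart=` first clears the ExecStart inherited from the base kubelet.service, and the second line installs the cluster-specific command. A sketch of writing that drop-in (paths and flags copied from the log; illustrative only, not minikube's code):

    package main

    import "os"

    func main() {
        // Same content as the 10-kubeadm.conf scp'd below: the empty ExecStart=
        // resets the base unit's command before the override sets the real one.
        dropIn := "[Unit]\n" +
            "Wants=docker.socket\n\n" +
            "[Service]\n" +
            "ExecStart=\n" +
            "ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245\n\n" +
            "[Install]\n"
        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
            panic(err)
        }
        // A `systemctl daemon-reload` must follow for systemd to pick this up.
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
    }
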
	I1212 23:14:10.779262    8472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:14:10.794245    8472 command_runner.go:130] > kubeadm
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubectl
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubelet
	I1212 23:14:10.794911    8472 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:14:10.809051    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:14:10.823032    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:14:10.848411    8472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:14:10.870951    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 23:14:10.911088    8472 ssh_runner.go:195] Run: grep 172.30.51.245	control-plane.minikube.internal$ /etc/hosts
	I1212 23:14:10.917196    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.51.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
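
That bash one-liner is the usual non-interactive /etc/hosts edit: filter out any stale control-plane.minikube.internal line, append the fresh mapping, stage the result in a temp file, and finish with a single `sudo cp` so only the final replacement needs privileges. The same idea in Go (illustrative; IP and hostname from the log, and it writes /etc/hosts directly, so it assumes root):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "172.30.51.245\tcontrol-plane.minikube.internal"
        raw, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Drop any stale control-plane.minikube.internal mapping, keep the rest.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }
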
	I1212 23:14:10.933858    8472 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000 for IP: 172.30.51.245
	I1212 23:14:10.933934    8472 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:10.934858    8472 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 23:14:10.935530    8472 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 23:14:10.936524    8472 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key
	I1212 23:14:10.936810    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt with IP's: []
	I1212 23:14:11.093297    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt ...
	I1212 23:14:11.093297    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt: {Name:mk11a4d3835ab9ea840eb8ac6add84affb6c8dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.094980    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key ...
	I1212 23:14:11.094980    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key: {Name:mk06fddcf6422638da0b31b4d428923c70703238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.095936    8472 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa
	I1212 23:14:11.096955    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa with IP's: [172.30.51.245 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:14:11.196952    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa ...
	I1212 23:14:11.197202    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa: {Name:mkdf435dcc8983bec1e572c7a448162db34b2756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.198846    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa ...
	I1212 23:14:11.198846    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa: {Name:mk41672c6a02cbb3382bef7d288d52f8f77ae5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.199921    8472 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt
	I1212 23:14:11.213239    8472 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key
	I1212 23:14:11.214508    8472 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key
	I1212 23:14:11.214661    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt with IP's: []
	I1212 23:14:11.328325    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt ...
	I1212 23:14:11.328325    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt: {Name:mk6e1ad80e6dad066789266c677d39834bd11583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.330616    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key ...
	I1212 23:14:11.330616    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key: {Name:mk3959079764fecf7ecbee13715f18146dcf3506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
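
The apiserver certificate generated above is signed by the shared minikube CA and carries IP SANs for the node IP (172.30.51.245), the `kubernetes` Service ClusterIP (10.96.0.1), loopback, and 10.0.0.1. A self-contained crypto/x509 sketch of the same shape (minikube's crypto.go additionally persists keys and takes file locks):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Self-signed CA, standing in for the shared minikubeCA above.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        must(err)
        caCert, err := x509.ParseCertificate(caDER)
        must(err)

        // Serving cert with the IP SANs listed in the log.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("172.30.51.245"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        must(err)
        must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
    }
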
	I1212 23:14:11.332006    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:14:11.332144    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:14:11.332442    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:14:11.342046    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:14:11.342358    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:14:11.342600    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:14:11.342813    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:14:11.343009    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:14:11.343165    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem (1338 bytes)
	W1212 23:14:11.343825    8472 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816_empty.pem, impossibly tiny 0 bytes
	I1212 23:14:11.343825    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 23:14:11.344117    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 23:14:11.344381    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 23:14:11.344630    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 23:14:11.344862    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem (1708 bytes)
	I1212 23:14:11.344862    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem -> /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.345574    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.345718    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:11.345852    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:14:11.386214    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:14:11.425674    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:14:11.464191    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:14:11.502474    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:14:11.538128    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:14:11.575129    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:14:11.613906    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:14:11.650659    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem --> /usr/share/ca-certificates/13816.pem (1338 bytes)
	I1212 23:14:11.686706    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /usr/share/ca-certificates/138162.pem (1708 bytes)
	I1212 23:14:11.726349    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:14:11.762200    8472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:14:11.800421    8472 ssh_runner.go:195] Run: openssl version
	I1212 23:14:11.809841    8472 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:14:11.823469    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13816.pem && ln -fs /usr/share/ca-certificates/13816.pem /etc/ssl/certs/13816.pem"
	I1212 23:14:11.861330    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.882273    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.889871    8472 command_runner.go:130] > 51391683
	I1212 23:14:11.903385    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13816.pem /etc/ssl/certs/51391683.0"
	I1212 23:14:11.935310    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138162.pem && ln -fs /usr/share/ca-certificates/138162.pem /etc/ssl/certs/138162.pem"
	I1212 23:14:11.964261    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970426    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970992    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.982253    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.990140    8472 command_runner.go:130] > 3ec20f2e
	I1212 23:14:12.009886    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138162.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:14:12.038995    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:14:12.069702    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.089604    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.096884    8472 command_runner.go:130] > b5213941
	I1212 23:14:12.110390    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
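
Each `openssl x509 -hash -noout` above prints the certificate's subject-name hash (51391683, 3ec20f2e, b5213941), and OpenSSL resolves trust by probing `/etc/ssl/certs/<hash>.0`, so the follow-up `ln -fs` makes each cert discoverable without rebuilding a bundle. A sketch of the hash-then-link pair (shells out to the real openssl binary; the PEM path is one of those from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // The -f in the shell version: replace any stale link first.
        _ = os.Remove(link)
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
        fmt.Println(link, "->", pem)
    }
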
	I1212 23:14:12.140395    8472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:14:12.146418    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 kubeadm.go:404] StartCluster: {Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:14:12.155995    8472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 23:14:12.194954    8472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:14:12.223698    8472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:14:12.252003    8472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266543    8472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266717    8472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:14:12.516893    8472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.516947    8472 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.517226    8472 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:14:12.517226    8472 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:14:13.027121    8472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027121    8472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027384    8472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027384    8472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027545    8472 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 23:14:13.027656    8472 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 23:14:13.446026    8472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447343    8472 out.go:204]   - Generating certificates and keys ...
	I1212 23:14:13.446026    8472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447732    8472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:14:13.447800    8472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:14:13.448160    8472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.448217    8472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.576197    8472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.576331    8472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.756341    8472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.756398    8472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.844910    8472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:13.844957    8472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:14.189004    8472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.189084    8472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.353924    8472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.353924    8472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.354351    8472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.354351    8472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.509618    8472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.509618    8472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.510200    8472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.510200    8472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.634812    8472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.634883    8472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.965686    8472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:14.965747    8472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:15.155790    8472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:14:15.155863    8472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:14:15.156194    8472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.156194    8472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.627970    8472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:15.628062    8472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:16.106269    8472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.106461    8472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.241202    8472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.241256    8472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.532306    8472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.532306    8472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.533302    8472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.533432    8472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.538562    8472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.538657    8472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.539723    8472 out.go:204]   - Booting up control plane ...
	I1212 23:14:16.539967    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.540045    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.541855    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.541855    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.543221    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.543286    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.570893    8472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.570998    8472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.572167    8472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572329    8472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572476    8472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:14:16.572590    8472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:14:16.741649    8472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:16.741649    8472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:25.247209    8472 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247209    8472 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247636    8472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.247636    8472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.274937    8472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.274937    8472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.809600    8472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.809600    8472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.810164    8472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:25.810216    8472 kubeadm.go:322] [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:26.326643    8472 kubeadm.go:322] [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.327542    8472 out.go:204]   - Configuring RBAC rules ...
	I1212 23:14:26.326643    8472 command_runner.go:130] > [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.328018    8472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.328018    8472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.341522    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.341728    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.354025    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.354025    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.359843    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.359843    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.364553    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.364553    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.369249    8472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.369249    8472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.393459    8472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.393481    8472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.711238    8472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.711357    8472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.750599    8472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.750686    8472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.751909    8472 kubeadm.go:322] 
	I1212 23:14:26.752244    8472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752244    8472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752424    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.753252    8472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753252    8472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753309    8472 kubeadm.go:322] 
	I1212 23:14:26.753415    8472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322] 
	I1212 23:14:26.753445    8472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.753445    8472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.754014    8472 kubeadm.go:322] 
	I1212 23:14:26.754183    8472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754220    8472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754289    8472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 kubeadm.go:322] 
	I1212 23:14:26.754289    8472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754289    8472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754820    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754820    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754878    8472 kubeadm.go:322] 	--control-plane 
	I1212 23:14:26.754917    8472 command_runner.go:130] > 	--control-plane 
	I1212 23:14:26.754917    8472 kubeadm.go:322] 
	I1212 23:14:26.754995    8472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] 
	I1212 23:14:26.755165    8472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755165    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755707    8472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:26.755762    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:26.756717    8472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:14:26.771363    8472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 23:14:26.781345    8472 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: 2023-12-12 23:12:39.138849800 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Change: 2023-12-12 23:12:30.064000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] >  Birth: -
	I1212 23:14:26.781345    8472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:14:26.781345    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:14:26.831214    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:14:28.360489    8472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5292685s)
	I1212 23:14:28.360489    8472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:14:28.377434    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.378438    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-392000 minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.385676    8472 command_runner.go:130] > -16
	I1212 23:14:28.385745    8472 ops.go:34] apiserver oom_adj: -16
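
The `-16` read back here is the kube-apiserver's OOM adjustment: a negative value tells the kernel's OOM killer to prefer other processes as victims. The probe is just pgrep plus a /proc read, sketched below:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0] // first matching pid
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }
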
	I1212 23:14:28.554211    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:14:28.554334    8472 command_runner.go:130] > node/multinode-392000 labeled
	I1212 23:14:28.574988    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.698031    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:28.717179    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.830537    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.348608    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.461037    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.849506    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.957356    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.362625    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.472272    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.848396    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.953849    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.353576    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.462341    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.853090    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.967586    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.355892    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.469924    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.859728    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.962773    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.364239    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.470177    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.864784    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.968916    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.351439    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.459257    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.855142    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.992369    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.364118    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.480745    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.848471    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.981045    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.353504    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:36.474547    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.857811    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.009603    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.360939    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.541831    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.855360    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.978223    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.358089    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:38.550481    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.868761    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.022604    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:39.352440    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.596621    8472 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:14:39.596712    8472 command_runner.go:130] > default   0         0s
	I1212 23:14:39.596736    8472 kubeadm.go:1088] duration metric: took 11.2361966s to wait for elevateKubeSystemPrivileges.
	I1212 23:14:39.596811    8472 kubeadm.go:406] StartCluster complete in 27.450269s
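
The burst of NotFound errors above is expected: kube-controller-manager creates the `default` ServiceAccount asynchronously after the namespace exists, so minikube simply polls `kubectl get sa default` until it shows up (about 11.2s here). The shape of such a loop, with a hypothetical timeout budget:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // hypothetical budget
        for {
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            if time.Now().After(deadline) {
                panic("timed out waiting for default ServiceAccount")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
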
	I1212 23:14:39.596862    8472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.597021    8472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.598694    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.600390    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:14:39.600697    8472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:14:39.600890    8472 addons.go:69] Setting storage-provisioner=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:69] Setting default-storageclass=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:231] Setting addon storage-provisioner=true in "multinode-392000"
	I1212 23:14:39.601014    8472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-392000"
	I1212 23:14:39.601153    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:39.601286    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:39.602024    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.602448    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.615520    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.616537    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.618133    8472 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:14:39.618679    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.618746    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.618746    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.618746    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.632969    8472 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1212 23:14:39.632969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Audit-Id: 48d468c3-d2b5-4ebf-8a31-5cfcaaf2e038
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.633400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.633475    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.633615    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634237    8472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634414    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.634442    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:39.634488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.647166    8472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:14:39.647166    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Audit-Id: 1d18df1e-467b-45b4-8fd3-f1be9c0eb077
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.647166    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.647166    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.647166    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.647166    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.650190    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:39.650593    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.650593    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Audit-Id: 257b2ee0-65f9-4fbe-a3e6-2b26b38e4e97
	I1212 23:14:39.650746    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.650746    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.650879    8472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-392000" context rescaled to 1 replicas
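
The GET/PUT pair above is the autoscaling/v1 Scale subresource in action: minikube reads the coredns deployment's Scale object, drops spec.replicas from 2 to 1, and writes it back. A minimal client-go sketch of the same call sequence (the clientset variable and the rescaleCoreDNS helper are assumptions for illustration, not minikube's kapi API):

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS mirrors the exchange logged above: read the Scale
    // subresource of the coredns deployment in kube-system, then write it
    // back with a single replica.
    func rescaleCoreDNS(ctx context.Context, clientset *kubernetes.Clientset) error {
        scale, err := clientset.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1 // spec.replicas: 2 -> 1, as in the PUT body above
        _, err = clientset.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }

Note in the responses that status.replicas stays at 2 immediately after the PUT; only spec changes, and the controller converges the status afterwards.
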
	I1212 23:14:39.650983    8472 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:14:39.652101    8472 out.go:177] * Verifying Kubernetes components...
	I1212 23:14:39.667782    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:39.958848    8472 command_runner.go:130] > apiVersion: v1
	I1212 23:14:39.958848    8472 command_runner.go:130] > data:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   Corefile: |
	I1212 23:14:39.958848    8472 command_runner.go:130] >     .:53 {
	I1212 23:14:39.958848    8472 command_runner.go:130] >         errors
	I1212 23:14:39.958848    8472 command_runner.go:130] >         health {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            lameduck 5s
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         ready
	I1212 23:14:39.958848    8472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            pods insecure
	I1212 23:14:39.958848    8472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:14:39.958848    8472 command_runner.go:130] >            ttl 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         prometheus :9153
	I1212 23:14:39.958848    8472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            max_concurrent 1000
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         cache 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loop
	I1212 23:14:39.958848    8472 command_runner.go:130] >         reload
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loadbalance
	I1212 23:14:39.958848    8472 command_runner.go:130] >     }
	I1212 23:14:39.958848    8472 command_runner.go:130] > kind: ConfigMap
	I1212 23:14:39.958848    8472 command_runner.go:130] > metadata:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:14:26Z"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   name: coredns
	I1212 23:14:39.958848    8472 command_runner.go:130] >   namespace: kube-system
	I1212 23:14:39.958848    8472 command_runner.go:130] >   resourceVersion: "257"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   uid: 7f397c04-a5c3-4364-9f10-d28458f5d6c8
	I1212 23:14:39.959540    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
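
The sed pipeline above edits the ConfigMap dumped a few lines earlier: one expression inserts a hosts block immediately before the forward plugin, the other inserts a log directive before errors, and the result is piped back through kubectl replace. Assuming the Corefile shown above, the rewritten plugin chain would look roughly like this (elisions mine):

    .:53 {
        log
        errors
        ...
        hosts {
           172.30.48.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }

This is what lets pods resolve host.minikube.internal to the Windows host at 172.30.48.1, confirmed by the "host record injected" line further down.
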
	I1212 23:14:39.961001    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.962156    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.963642    8472 node_ready.go:35] waiting up to 6m0s for node "multinode-392000" to be "Ready" ...
	I1212 23:14:39.963798    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.963914    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.963987    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.963987    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.969659    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:39.969659    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Audit-Id: ed4f4991-8208-4d64-8919-42fbdb031b1b
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.970862    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:39.972406    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.972406    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.972643    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.972643    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.974394    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:39.975312    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Audit-Id: 8a9ed035-646e-4f38-b110-fe61c0dc496f
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.975401    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.975946    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.488957    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.488957    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.488957    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.488957    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.492969    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:40.492969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Audit-Id: d903c580-8adc-4d96-8f5f-d51f731bc93c
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.492969    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.668167    8472 command_runner.go:130] > configmap/coredns replaced
	I1212 23:14:40.669157    8472 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
	I1212 23:14:40.981876    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.981950    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.982011    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.982011    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.991394    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:40.991394    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Audit-Id: ab5b6285-e3ff-4e6f-b61b-a20df0759ba6
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.991394    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.489914    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.490030    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.490030    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.490030    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.494868    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:41.495917    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Audit-Id: 1e563910-36f9-4968-810e-a0bd4b1bd52f
	I1212 23:14:41.496167    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.496302    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.496696    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.903563    8472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:41.903563    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:41.904285    8472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:41.904285    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:14:41.904285    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.905110    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:41.906532    8472 addons.go:231] Setting addon default-storageclass=true in "multinode-392000"
	I1212 23:14:41.906532    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:41.907304    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.980106    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.980486    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.980486    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.980486    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.985786    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:41.985786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.985786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Audit-Id: 08bb64de-dde1-4fa6-8913-0f6b5de0cf24
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.986033    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.986033    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.986463    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.987219    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
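
The node_ready loop above issues the same GET against /api/v1/nodes/multinode-392000 roughly every half second until the node's Ready condition turns True or the 6m0s budget runs out. A hedged client-go equivalent of that loop (waitNodeReady is an illustrative helper, not minikube's function):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its Ready condition reports
    // True, giving up after six minutes, as the node_ready loop in the log does.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
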
	I1212 23:14:42.486548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.486653    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.486653    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.486653    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.496333    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:42.496447    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.496447    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.496524    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.496524    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Audit-Id: 4ab1601a-d766-4e5d-a976-df70bc7f3fc6
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.496654    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.497705    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:42.979753    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.979865    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.979865    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.979865    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.984301    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:42.984301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Audit-Id: d84e4388-d133-418c-ad44-eb666ea80368
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.984627    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.984771    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.985134    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.487286    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.487436    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.487436    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.487436    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.493059    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.493240    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Audit-Id: ff7197c8-30b8-4b58-8cc1-df9d319b0dbf
	I1212 23:14:43.493700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.979059    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.979132    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.979132    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.979132    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.984231    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.984231    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Audit-Id: a3b2e6ef-d4d8-4f3e-b9c5-6d5c3c21bbd3
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.984345    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.984345    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.984416    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.984416    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.984602    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.095027    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.095183    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.095249    8472 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:44.095249    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:14:44.095249    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.120131    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
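
Every Hyper-V query in this log is libmachine shelling out to powershell.exe with -NoProfile -NonInteractive and parsing stdout. A sketch of that pattern with os/exec (vmIPAddress is a hypothetical helper, not the driver's API):

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmIPAddress runs the one-liner seen above and returns the first IP
    // address of the VM's first network adapter, trimmed of trailing newlines.
    func vmIPAddress(vm string) (string, error) {
        script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", script,
        ).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

The [stdout =====>] / [stderr =====>] lines interleaved through the log are the captured halves of exactly these invocations.
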
	I1212 23:14:44.483249    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.483332    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.483332    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.483332    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.487173    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.488191    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.488191    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.488335    8472 round_trippers.go:580]     Audit-Id: 266b4ffc-e86f-4f1b-b463-36bca9136481
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.488839    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.489392    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:44.989331    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.989428    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.989428    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.989428    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.992917    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.993400    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Audit-Id: d75583c4-9a74-49b4-bbf3-b56138886974
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.993757    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.481494    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.481494    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.481494    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.481778    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.487002    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:45.487002    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Audit-Id: 34cccb14-bef0-4d33-bac4-e822ad4bf7d0
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.487387    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.990444    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.990444    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.990444    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.990444    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.994459    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:45.995453    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Audit-Id: 75a4ef11-ddaa-4f93-8672-e7309c071368
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.995553    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.995597    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.996008    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.478860    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.478860    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.478860    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.478860    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.482906    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:46.482906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.482906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.484021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Audit-Id: f2e453d5-50bc-4639-bda1-a5a03905d0ad
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:46.484906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.485283    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.902984    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
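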
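
With the IP in hand, sshutil.go dials the VM directly: key-based auth as user "docker" on port 22, using the profile's per-machine id_rsa. Roughly, with golang.org/x/crypto/ssh (newVMClient is illustrative, not minikube's sshutil API):

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newVMClient connects with the fields logged above: user "docker",
    // the machine's id_rsa key, and the VM's IP on port 22.
    func newVMClient(ip, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        }
        return ssh.Dial("tcp", ip+":22", cfg)
    }
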
	I1212 23:14:46.980436    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.980521    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.980521    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.980521    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.984189    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:46.984189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Audit-Id: 7c159fbf-c0d0-41ed-a33b-761beff59770
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.984333    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.984744    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.985579    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:47.051355    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:47.484303    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.484303    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.484303    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.484303    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.488895    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:47.488895    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Audit-Id: 28e8c341-cf42-49da-a69a-ab79f001048f
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.489240    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:47.868848    8472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:14:47.868942    8472 command_runner.go:130] > pod/storage-provisioner created
	I1212 23:14:47.990911    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.991083    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.991083    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.991083    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.996324    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:47.996324    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Audit-Id: 898f23b9-63a4-46cb-8539-9e21fae3e688
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.997714    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.480781    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.480862    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.480862    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.480862    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.484374    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.485189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.485189    8472 round_trippers.go:580]     Audit-Id: 1a3b1ec7-5eb6-4bb8-b344-5426a5516c00
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.485621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.989623    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.989623    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.989623    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.989698    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.992877    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.993906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Audit-Id: 975a7df8-210f-4288-bec3-86537d1ea98a
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.993906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.993906    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:49.083047    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:49.083318    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:49.083618    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:14:49.220179    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:49.478362    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.478404    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.478488    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.478488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.486550    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:49.486550    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Audit-Id: 886c4e27-fc97-4d2e-be30-23c8528e1331
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.487579    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:49.633908    8472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:14:49.634368    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:14:49.634438    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.634438    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.634438    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.638301    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.638301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Length: 1273
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Audit-Id: 478d6e3c-e333-45bd-ad37-ff39e2c109a4
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.638613    8472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:14:49.639458    8472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.639570    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:14:49.639570    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:49.639632    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.643499    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.643499    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.643499    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Length: 1220
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Audit-Id: a15a2fa8-ae37-4d33-8ee0-c9808f9a288d
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.644178    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.644178    8472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.682970    8472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:14:49.684353    8472 addons.go:502] enable addons completed in 10.0836106s: enabled=[storage-provisioner default-storageclass]
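The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses/standard above is the default-storageclass step: the class is fetched and written back with the storageclass.kubernetes.io/is-default-class annotation set to "true" (the annotation is visible verbatim in the response bodies). Below is a minimal client-go sketch of that read-modify-write, not minikube's actual code; the in-VM kubeconfig path is taken from the log and assumed reachable.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// GET then PUT, as in the round_trippers lines above: fetch the class,
	// set the default-class annotation, write it back.
	sc, err := clientset.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("storageclass standard marked default")
}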
	I1212 23:14:49.980729    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.980729    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.980729    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.980729    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.984838    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:49.985229    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Audit-Id: ce24cfdd-3acb-4830-ac23-4db47133d6a3
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.985323    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.985624    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.483312    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.483375    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.483375    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.483375    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.488227    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:50.488227    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Audit-Id: 6991df1a-7c65-4f8c-aa6d-8a4b07664792
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.488335    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.488445    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.981018    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.981153    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.981153    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.981153    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.986420    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:50.987021    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Audit-Id: 05d03ac9-757b-47ae-892d-06c9975e0504
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.987288    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.481784    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.481935    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.481935    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.481935    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.487331    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:51.487741    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Audit-Id: ea8e810d-7571-41b8-a29c-f7b350aa7e48
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.488700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.489229    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:51.980060    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.980060    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.980060    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.980060    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.986763    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:51.987222    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Audit-Id: e66e1130-e80e-4e5c-a2df-c6f097d5374f
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.987303    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.487530    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.487615    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.487615    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.487615    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.491306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.491306    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Audit-Id: 6d39f79a-048a-4380-88c0-1538a97cf6cb
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.492158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.988203    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.988350    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.988350    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.988350    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.991874    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.991874    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Audit-Id: b82dc74d-b44e-41ac-8e64-37803addc6c1
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.991874    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.992376    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.992376    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.992866    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.487128    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.487128    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.487128    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.487128    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.490404    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.490404    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Audit-Id: fcdaf883-7338-4102-abda-846f7169bb26
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.491349    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.491797    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:53.988709    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.988958    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.988958    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.988958    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.992351    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.992351    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Audit-Id: c1836498-4d32-49e6-a01e-d2011a223374
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.992796    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.992872    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.992872    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.993179    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.484052    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.484152    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.484152    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.484152    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.487262    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:54.487786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Audit-Id: f53da0c3-a775-4443-aabf-f7c4222d5d96
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.488171    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.984021    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.984123    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.984123    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.984123    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.989880    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:54.989880    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Audit-Id: c5095c7c-a76c-429e-af60-764abe494287
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.991622    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.485045    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.485181    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.485181    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.485181    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.489762    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:55.489762    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Audit-Id: 4f7c8477-81de-4b39-8164-bf264c826669
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.489762    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.490338    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.490338    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.490621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.987165    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.987255    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.987255    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.987255    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.990960    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:55.991209    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Audit-Id: 730af8dd-1c79-432a-ac28-d735f45d211a
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.991209    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:55.991993    8472 node_ready.go:49] node "multinode-392000" has status "Ready":"True"
	I1212 23:14:55.991993    8472 node_ready.go:38] duration metric: took 16.0282441s waiting for node "multinode-392000" to be "Ready" ...
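The polling visible above (a GET on /api/v1/nodes/multinode-392000 roughly every half second until the node_ready check flips from "Ready":"False" to "Ready":"True") is an ordinary readiness wait against the node's Ready condition. A minimal client-go sketch of such a loop follows; the node name comes from the log, while the kubeconfig location and the 500 ms cadence (inferred from the timestamps) are assumptions.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True; this is the
// check behind the node_ready.go lines above.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig location is an assumption for the sketch.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := clientset.CoreV1().Nodes().Get(ctx, "multinode-392000", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if nodeReady(node) {
			fmt.Println("node multinode-392000 is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node readiness")
		case <-time.After(500 * time.Millisecond): // cadence inferred from the log
		}
	}
}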
	I1212 23:14:55.991993    8472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:14:55.992424    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:55.992451    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.992451    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.992451    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.997828    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:55.997828    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Audit-Id: 52d7810c-f76c-4c45-9178-39943c5e611e
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:56.000563    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I1212 23:14:56.005713    8472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
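The per-pod wait that follows applies the same pattern at pod granularity: each GET on the coredns pod is checked for a PodReady condition with status True. A minimal sketch of that single check is below (pod name and namespace from the log; kubeconfig location assumed); the surrounding retry loop would mirror the node example above.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, the
// condition the pod_ready.go wait above is watching for.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Pod name and namespace come from the log lines above.
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-5dd5756b68-4xn8h", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}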
	I1212 23:14:56.005713    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.005713    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.005713    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.005713    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.009293    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:56.009293    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.009293    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Audit-Id: 349c895b-3263-4592-bf5f-cc4fce22f4db
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.009961    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.010548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.010601    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.010601    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.010670    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.013302    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.013302    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Audit-Id: 14638822-3485-4ab6-af72-f2d254050772
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.013994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.014102    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.014102    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.014313    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.014948    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.014948    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.014948    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.014948    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.017876    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.017876    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Audit-Id: e61611d3-94ea-464c-acce-2a665e01fb85
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.018073    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.018159    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.018325    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.018970    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.019023    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.019023    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.019078    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.020855    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:56.020855    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Audit-Id: d723e84b-6004-4853-8f4c-e9de464efdde
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.021772    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.021800    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.021800    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.021800    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.536622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.536622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.536622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.536622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.540896    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:56.540896    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.541442    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Audit-Id: ea416197-cb64-40af-bf73-38fd2e37a823
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.541534    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.541534    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.541670    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.542439    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.542559    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.542559    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.542559    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.544902    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.544902    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Audit-Id: 82379cb0-03c3-4187-8a08-c95f8c2d434e
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.546107    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.027636    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.027717    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.027791    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.027791    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.030425    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.030425    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Audit-Id: 856b15b9-b6fa-489d-9a24-eaaf1afc5bd5
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.031434    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.032501    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.032606    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.032658    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.032658    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.035158    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.035158    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Audit-Id: 2f81449f-83b9-4c66-bc2e-17ac17b48322
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.035158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.534454    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.534587    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.534587    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.534587    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.541021    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:57.541365    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Audit-Id: bb822741-a39c-491c-8b27-f5dc32b9ac7d
	I1212 23:14:57.541943    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.542190    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.542190    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.542190    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.542190    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.545257    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:57.545257    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.545896    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Audit-Id: 27629acd-42f2-4083-aba9-c01ef165283c
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.546084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.546084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.546180    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.546712    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:58.023516    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:58.023822    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.023880    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.023880    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.027764    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.028057    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.028057    8472 round_trippers.go:580]     Audit-Id: 1522c4b2-abdb-44ed-9ac8-0a151cbe371e
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.028173    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.028494    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1212 23:14:58.029540    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.029617    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.029617    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.029617    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.032006    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.033008    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Audit-Id: 5f970653-a2f7-4b0e-ab8b-5146ee17b7e9
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.033046    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.033115    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.033423    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.034124    8472 pod_ready.go:92] pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.034124    8472 pod_ready.go:81] duration metric: took 2.0284013s waiting for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034124    8472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034268    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-392000
	I1212 23:14:58.034268    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.034268    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.034268    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.040664    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:58.040664    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Audit-Id: 8ec23e55-3f6f-45bb-9dd5-58fa0a89221a
	I1212 23:14:58.041172    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-392000","namespace":"kube-system","uid":"9ba15872-d011-4389-bbbd-cda3bb377f30","resourceVersion":"299","creationTimestamp":"2023-12-12T23:14:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.51.245:2379","kubernetes.io/config.hash":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.mirror":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.seen":"2023-12-12T23:14:17.439033677Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1212 23:14:58.041719    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.041719    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.041719    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.041719    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.045328    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.045328    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Audit-Id: 9c560ca1-5f98-49b8-ae36-71e9aa076f5e
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.045328    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.045328    8472 pod_ready.go:92] pod "etcd-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.045328    8472 pod_ready.go:81] duration metric: took 11.2037ms waiting for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-392000
	I1212 23:14:58.046330    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.046330    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.046330    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.048649    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.048649    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Audit-Id: ebed4532-17cb-49da-a702-3de6ff899b2d
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.048649    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-392000","namespace":"kube-system","uid":"4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75","resourceVersion":"330","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.51.245:8443","kubernetes.io/config.hash":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.mirror":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.seen":"2023-12-12T23:14:26.871565960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1212 23:14:58.048649    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.048649    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.048649    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.052979    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.052979    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.052979    8472 round_trippers.go:580]     Audit-Id: ba4e3ef6-8436-406b-be77-63a9e785adac
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.053729    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.053941    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.054233    8472 pod_ready.go:92] pod "kube-apiserver-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.054233    8472 pod_ready.go:81] duration metric: took 8.9055ms waiting for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-392000
	I1212 23:14:58.054233    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.054233    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.054233    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.057795    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.057795    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.057795    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.057795    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Audit-Id: 23c9283e-f0e0-44ab-b1c7-820bcafbc897
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.058055    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.058481    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-392000","namespace":"kube-system","uid":"60a15f93-6e63-4c2e-a54e-7e6a2275127c","resourceVersion":"296","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.mirror":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.seen":"2023-12-12T23:14:26.871570660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1212 23:14:58.059284    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.059347    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.059347    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.059347    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.067599    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:58.067599    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Audit-Id: cd4581bf-1000-4906-812b-59a573920066
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.068544    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.068544    8472 pod_ready.go:92] pod "kube-controller-manager-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.068544    8472 pod_ready.go:81] duration metric: took 14.3106ms waiting for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.068544    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.194675    8472 request.go:629] Waited for 125.8741ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.194825    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.194825    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.198109    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.198109    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Audit-Id: 5a8d39b0-49cf-41c3-9e07-80cfc7e1b033
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.198994    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.199312    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55nr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"76f72515-2132-4473-883e-2846ebaca62e","resourceVersion":"403","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"932f2a4e-5c28-4c7c-8885-1298fbe1d167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932f2a4e-5c28-4c7c-8885-1298fbe1d167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
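
The "Waited for … due to client-side throttling, not priority and fairness" lines here come from client-go's own token-bucket rate limiter, which delays requests inside the client before the server's priority-and-fairness machinery is ever involved. The budget is the QPS and Burst fields of rest.Config. A minimal sketch of building a clientset with a larger budget (the values are illustrative, not minikube's actual settings):

	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// newClient builds a clientset whose client-side limiter allows a
	// steady 50 req/s with bursts of 100; requests beyond that budget
	// block and emit the "Waited for ..." message seen in this log.
	func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}
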
	I1212 23:14:58.398673    8472 request.go:629] Waited for 198.4474ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.398787    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.398966    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.401717    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.401717    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.401717    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Audit-Id: b728eb3e-d54c-43cb-90ce-e7b356f69ae4
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.402725    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.402828    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.403583    8472 pod_ready.go:92] pod "kube-proxy-55nr8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.403583    8472 pod_ready.go:81] duration metric: took 335.0375ms waiting for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.403583    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.601380    8472 request.go:629] Waited for 197.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.601681    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.601681    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.605957    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.606145    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Audit-Id: 02f9b40f-c4e0-4c98-bcbc-9913ccb796e7
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.606409    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-392000","namespace":"kube-system","uid":"1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2","resourceVersion":"295","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.mirror":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.seen":"2023-12-12T23:14:26.871571660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1212 23:14:58.789396    8472 request.go:629] Waited for 182.2618ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789688    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789779    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.789779    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.789828    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.793340    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.794060    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Audit-Id: e123c53f-d439-4d57-931f-9f875d26f581
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.794126    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.795030    8472 pod_ready.go:92] pod "kube-scheduler-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.795030    8472 pod_ready.go:81] duration metric: took 391.4452ms waiting for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.795030    8472 pod_ready.go:38] duration metric: took 2.8027177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
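
Each "waiting up to 6m0s for pod … to be "Ready"" block above is a poll loop: GET the pod, inspect its PodReady condition, sleep about half a second (the request timestamps advance in roughly 500ms steps), and repeat until the condition is True or the timeout expires. A minimal client-go sketch of that pattern; this is a generic reimplementation, not minikube's actual pod_ready.go code:

	package main
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodReady polls every 500ms until the pod reports the PodReady
	// condition as True, the timeout elapses, or an API error aborts it.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err // a non-nil error stops the poll
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // condition not posted yet; keep polling
			})
	}

Called as waitPodReady(ctx, cs, "kube-system", "coredns-5dd5756b68-4xn8h", 6*time.Minute), this mirrors the 6m0s per-pod budget the log reports.
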
	I1212 23:14:58.795030    8472 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:14:58.810986    8472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:14:58.830637    8472 command_runner.go:130] > 2099
	I1212 23:14:58.830637    8472 api_server.go:72] duration metric: took 19.1794438s to wait for apiserver process to appear ...
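
In the pgrep invocation above, -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n selects only the newest matching process. The single line of output, 2099, is the apiserver's PID; a non-zero exit would have meant no matching process yet, and the wait would have continued.
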
	I1212 23:14:58.830637    8472 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:14:58.830637    8472 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:14:58.838776    8472 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
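
A healthy apiserver answers GET /healthz with status 200 and the literal body "ok", which is exactly what the two lines above record. The same probe through client-go's REST client, a generic sketch reusing the imports from the earlier sketches plus fmt:

	// healthz issues GET /healthz and checks for the literal "ok" body.
	// DoRaw returns an error for non-2xx responses, so err != nil covers
	// both transport failures and an unhealthy apiserver.
	func healthz(ctx context.Context, cs *kubernetes.Clientset) error {
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return err
		}
		if string(body) != "ok" {
			return fmt.Errorf("unexpected healthz body: %q", body)
		}
		return nil
	}
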
	I1212 23:14:58.839718    8472 round_trippers.go:463] GET https://172.30.51.245:8443/version
	I1212 23:14:58.839718    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.839718    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.839718    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.841290    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:58.841290    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.841290    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Length: 264
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.841836    8472 round_trippers.go:580]     Audit-Id: 46b8d46d-380f-4f82-941f-34d5ff7fc981
	I1212 23:14:58.841875    8472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:14:58.841973    8472 api_server.go:141] control plane version: v1.28.4
	I1212 23:14:58.842105    8472 api_server.go:131] duration metric: took 11.468ms to wait for apiserver health ...
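
The /version payload above is also exposed through a typed discovery helper, which is the usual way to recover the control plane version ("v1.28.4" here) without hand-parsing the JSON. A sketch, again generic rather than minikube's exact code path:

	// serverVersion returns the apiserver's gitVersion, e.g. "v1.28.4",
	// via the typed wrapper around GET /version.
	func serverVersion(cs *kubernetes.Clientset) (string, error) {
		info, err := cs.Discovery().ServerVersion()
		if err != nil {
			return "", err
		}
		return info.GitVersion, nil
	}
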
	I1212 23:14:58.842105    8472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:14:58.990794    8472 request.go:629] Waited for 148.3275ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990949    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990993    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.990993    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.990993    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.996780    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:58.996780    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.997050    8472 round_trippers.go:580]     Audit-Id: ef9a1c82-2d0d-4fd5-aef9-3720896905c4
	I1212 23:14:58.998795    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.002276    8472 system_pods.go:59] 8 kube-system pods found
	I1212 23:14:59.002323    8472 system_pods.go:61] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.002414    8472 system_pods.go:74] duration metric: took 160.3082ms to wait for pod list to return data ...
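
The "8 kube-system pods found" summary comes from a single List call over the namespace, after which each item's phase is checked; one list is cheaper than eight individual GETs. A generic sketch of that step:

	// runningSystemPods lists kube-system once and returns the names of
	// pods whose phase is Running, mirroring the summary lines above.
	func runningSystemPods(ctx context.Context, cs *kubernetes.Clientset) ([]string, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		var names []string
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				names = append(names, p.Name)
			}
		}
		return names, nil
	}
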
	I1212 23:14:59.002414    8472 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:14:59.195077    8472 request.go:629] Waited for 192.5258ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.195622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.195622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.199306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.199787    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Audit-Id: d11e054d-44f1-4ba9-98c1-9a69160ffdff
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Length: 261
	I1212 23:14:59.199787    8472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7c305be4-9460-4ff1-a283-85a13dcb1cde","resourceVersion":"367","creationTimestamp":"2023-12-12T23:14:39Z"}}]}
	I1212 23:14:59.199787    8472 default_sa.go:45] found service account: "default"
	I1212 23:14:59.199787    8472 default_sa.go:55] duration metric: took 197.3719ms for default service account to be created ...
	I1212 23:14:59.199787    8472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:14:59.396801    8472 request.go:629] Waited for 196.4246ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.397321    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.397321    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.400691    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.400691    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Audit-Id: 70f11694-1074-4f5f-b23d-4a24efbaa455
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.403399    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.408656    8472 system_pods.go:86] 8 kube-system pods found
	I1212 23:14:59.409213    8472 system_pods.go:89] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.409293    8472 system_pods.go:126] duration metric: took 209.505ms to wait for k8s-apps to be running ...
	I1212 23:14:59.409358    8472 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:14:59.423142    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:59.445203    8472 system_svc.go:56] duration metric: took 35.9106ms WaitForService to wait for kubelet.
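[Note] systemctl is-active --quiet <unit> prints nothing and reports state purely through its exit code (0 = active), which is why the log records only a duration. A sketch of the same check from Go (run locally here; minikube executes it on the guest over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit code 0 means the unit is active; any non-zero code
        // (inactive, failed, unknown unit) surfaces as a non-nil error.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }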
	I1212 23:14:59.445871    8472 kubeadm.go:581] duration metric: took 19.7946755s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:14:59.445871    8472 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:14:59.598916    8472 request.go:629] Waited for 152.7318ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.599012    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.599012    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.605849    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:59.605849    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Audit-Id: 36bbb4b8-2cd2-4825-9a0a-f9d3f7de5388
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.605849    8472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1212 23:14:59.606649    8472 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:14:59.606649    8472 node_conditions.go:123] node cpu capacity is 2
	I1212 23:14:59.606649    8472 node_conditions.go:105] duration metric: took 160.7768ms to run NodePressure ...
	I1212 23:14:59.606649    8472 start.go:228] waiting for startup goroutines ...
	I1212 23:14:59.606649    8472 start.go:233] waiting for cluster config update ...
	I1212 23:14:59.606649    8472 start.go:242] writing updated cluster config ...
	I1212 23:14:59.609246    8472 out.go:177] 
	I1212 23:14:59.621487    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:59.622710    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.625530    8472 out.go:177] * Starting worker node multinode-392000-m02 in cluster multinode-392000
	I1212 23:14:59.626570    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:14:59.626570    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:14:59.627622    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:14:59.627622    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:14:59.627622    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.635421    8472 start.go:365] acquiring machines lock for multinode-392000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:14:59.636404    8472 start.go:369] acquired machines lock for "multinode-392000-m02" in 983.5µs
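[Note] Machine creation is serialized behind a named lock; the Name/Delay/Timeout fields above (500ms retry, 13m timeout) describe the acquire spec. A sketch of the acquire-with-retry pattern using a hypothetical file lock (minikube itself uses a named OS mutex, not a lock file):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries until it wins the lock or the timeout elapses.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation atomic: exactly one caller wins.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for " + path)
            }
            time.Sleep(delay) // retry at the logged 500ms cadence
        }
    }

    func main() {
        release, err := acquire("/tmp/machines.demo.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; provisioning would happen here")
    }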
	I1212 23:14:59.636641    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1212 23:14:59.636641    8472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1212 23:14:59.637295    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:14:59.637925    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:14:59.637925    8472 client.go:168] LocalClient.Create starting
	I1212 23:14:59.637925    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:14:59.638507    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.638593    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.638845    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:14:59.639076    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.639124    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.639207    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:15:01.516858    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:04.771547    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:04.771630    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:04.771709    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:08.419999    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:08.420189    8472 main.go:141] libmachine: [stderr =====>] : 
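[Note] The driver has no Go API for Hyper-V; every step is a powershell.exe -NoProfile -NonInteractive one-liner whose stdout/stderr are echoed in the [stdout]/[stderr] lines above. This query prefers an External switch and otherwise falls back to the built-in "Default Switch" (c08cb7b8-9b3c-408e-8e30-5e16a3aeb444 is its fixed ID). A sketch of issuing such a query from Go and decoding the JSON:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Field names follow the JSON printed in the log above.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", switches)
    }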
	I1212 23:15:08.422680    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:15:08.872411    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: Creating VM...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:12.102765    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:12.102977    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:12.103063    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:15:12.103063    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:13.864474    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:13.864777    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:13.864985    8472 main.go:141] libmachine: Creating VHD
	I1212 23:15:13.864985    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C3CD4AE2-4C48-4AEE-B99B-DEEF0B4769F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:17.628988    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:15:17.629139    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:15:17.638018    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:20.769313    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -SizeBytes 20000MB
	I1212 23:15:23.326059    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:23.326281    8472 main.go:141] libmachine: [stderr =====>] : 
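[Note] The tiny 10MB *fixed* VHD looks odd but appears deliberate: a fixed VHD is a flat image, so the driver can write a raw tar stream (the "magic tar header" plus the generated SSH key) at a known offset, then convert the file to a dynamic VHD and resize it to the requested 20000MB; a boot2docker-style guest detects the tar signature on first boot, partitions the disk, and extracts the key. A sketch of producing such a payload with archive/tar (file name and entry layout are illustrative):

    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        // Illustrative stand-in for the fixed VHD the driver just created.
        f, err := os.OpenFile("disk.img", os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        key := []byte("ssh-rsa AAAA... placeholder public key\n")
        tw := tar.NewWriter(f) // tar stream starts at byte 0 of the image
        // The guest scans the head of the disk for this tar signature on
        // first boot and unpacks the entries it finds.
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            panic(err)
        }
        if _, err := tw.Write(key); err != nil {
            panic(err)
        }
        if err := tw.Close(); err != nil {
            panic(err)
        }
    }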
	I1212 23:15:23.326443    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-392000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000-m02 -DynamicMemoryEnabled $false
	I1212 23:15:29.047581    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:29.047983    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:29.048174    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000-m02 -Count 2
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:31.217251    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\boot2docker.iso'
	I1212 23:15:33.748082    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd'
	I1212 23:15:36.359294    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: Starting VM...
	I1212 23:15:36.359738    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000-m02
	I1212 23:15:39.227776    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:15:39.228071    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:41.509631    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:44.031565    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:44.031787    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:45.038541    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:49.774015    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:49.774142    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:50.775721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:55.502870    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:55.503039    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:56.518873    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:58.738659    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:58.738736    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:58.738844    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:02.269014    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:04.506810    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:04.506866    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:04.506903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:07.087487    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:07.087855    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:07.088033    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stderr =====>] : 
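[Note] The repeated state/ipaddresses queries above form a poll loop: Hyper-V reports the VM Running almost immediately, but the first adapter has no address until DHCP completes (the empty [stdout] lines), so the driver sleeps about a second and retries until an IP such as 172.30.56.38 appears. The shape of the loop, sketched:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForIP polls until the VM reports an IPv4 address, mirroring the
    // repeated (Get-VM ...).networkadapters[0].ipaddresses[0] calls above.
    func waitForIP(getIP func() string, tries int) (string, error) {
        for i := 0; i < tries; i++ {
            if ip := getIP(); ip != "" {
                return ip, nil
            }
            time.Sleep(time.Second) // matches the ~1s gap between attempts
        }
        return "", fmt.Errorf("no IP after %d attempts", tries)
    }

    func main() {
        n := 0
        ip, err := waitForIP(func() string {
            n++
            if n < 5 { // first few polls come back empty, as in the log
                return ""
            }
            return "172.30.56.38"
        }, 60)
        fmt.Println(ip, err)
    }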
	I1212 23:16:09.244063    8472 machine.go:88] provisioning docker machine ...
	I1212 23:16:09.244248    8472 buildroot.go:166] provisioning hostname "multinode-392000-m02"
	I1212 23:16:09.244333    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:11.421631    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:13.977447    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:13.977572    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:13.983166    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:13.992249    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:13.992249    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname
	I1212 23:16:14.163299    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000-m02
	
	I1212 23:16:14.163350    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:16.307595    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:18.839723    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:18.840482    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:18.840482    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:18.989326    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
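[Note] The shell branch above makes the new hostname resolve locally: an existing 127.0.1.1 line is rewritten in place with sed, otherwise one is appended with tee -a. Either way, /etc/hosts ends up containing:

    127.0.1.1 multinode-392000-m02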
	I1212 23:16:18.990311    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:16:18.990311    8472 buildroot.go:174] setting up certificates
	I1212 23:16:18.990311    8472 provision.go:83] configureAuth start
	I1212 23:16:18.990453    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:21.069665    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:23.556570    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:28.222549    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:28.222832    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:28.222832    8472 provision.go:138] copyHostCerts
	I1212 23:16:28.223026    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:16:28.223356    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:16:28.223356    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:16:28.223923    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:16:28.224665    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:16:28.225195    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:16:28.225367    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:16:28.225569    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:16:28.226891    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:16:28.227287    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:16:28.227287    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:16:28.227775    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:16:28.228810    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000-m02 san=[172.30.56.38 172.30.56.38 localhost 127.0.0.1 minikube multinode-392000-m02]
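[Note] The server certificate is minted per machine and carries the VM's IP, localhost, and hostnames in its SAN list, so TLS verification succeeds whichever address a client dials. A self-signed sketch of building that SAN set with crypto/x509 (the real flow signs with ca.pem/ca-key.pem rather than self-signing):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN entries logged above: IPs plus hostnames.
            DNSNames:    []string{"localhost", "minikube", "multinode-392000-m02"},
            IPAddresses: []net.IP{net.ParseIP("172.30.56.38"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }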
	I1212 23:16:28.608171    8472 provision.go:172] copyRemoteCerts
	I1212 23:16:28.622324    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:28.622324    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:30.750561    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:33.272878    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:33.273157    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:33.273672    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:33.380622    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7582767s)
	I1212 23:16:33.380733    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:16:33.380808    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:16:33.420401    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:16:33.420965    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:16:33.458601    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:16:33.458774    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:16:33.496244    8472 provision.go:86] duration metric: configureAuth took 14.5058679s
	I1212 23:16:33.496324    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:33.496868    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:16:33.497008    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:38.152182    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:38.152702    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:38.152702    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:16:38.292294    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:16:38.292294    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:16:38.292555    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:16:38.292555    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:40.464946    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:43.007365    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:43.008294    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:43.008294    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.51.245"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:16:43.171083    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.51.245
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:16:43.171185    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:45.284624    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:47.800669    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:47.801716    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:47.801716    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:16:48.748338    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:16:48.748338    8472 machine.go:91] provisioned docker machine in 39.5040974s
	I1212 23:16:48.748338    8472 client.go:171] LocalClient.Create took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m49.1099214s
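[Note] The `diff -u old new || { mv ...; systemctl ... }` command above is an install-if-changed idiom: the unit file is replaced and docker restarted only when the rendered content differs, and because diff also fails when the target is missing (the "can't stat" output), a freshly created machine always takes the install branch. The same logic in Go, sketched:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged only rewrites the unit and restarts the service
    // when the content actually changed.
    func installIfChanged(path string, content []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return nil // unchanged: skip the restart entirely
        }
        if err := os.WriteFile(path, content, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"-f", "daemon-reload"},
            {"-f", "enable", "docker"},
            {"-f", "restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        err := installIfChanged("/tmp/docker.service.demo", []byte("[Unit]\n"))
        fmt.Println("install:", err)
    }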
	I1212 23:16:48.748338    8472 start.go:300] post-start starting for "multinode-392000-m02" (driver="hyperv")
	I1212 23:16:48.748887    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:48.762204    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:48.762204    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:50.863756    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:53.416608    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:53.526358    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7640815s)
	I1212 23:16:53.541029    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:53.550919    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:16:53.550919    8472 command_runner.go:130] > ID=buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:16:53.550919    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:16:53.551099    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:16:53.552635    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:16:53.552635    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:16:53.567223    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:53.582208    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:16:53.623271    8472 start.go:303] post-start completed in 4.8749111s
	I1212 23:16:53.626212    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:55.698604    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:58.239486    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:16:58.242308    8472 start.go:128] duration metric: createHost completed in 1m58.6051335s
	I1212 23:16:58.242308    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:00.321547    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:02.864207    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:02.864907    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:02.864907    8472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:03.006436    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423023.005320607
	
	I1212 23:17:03.006436    8472 fix.go:206] guest clock: 1702423023.005320607
	I1212 23:17:03.006436    8472 fix.go:219] Guest: 2023-12-12 23:17:03.005320607 +0000 UTC Remote: 2023-12-12 23:16:58.2423084 +0000 UTC m=+328.348317501 (delta=4.763012207s)
	I1212 23:17:03.006606    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:05.102311    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:07.631708    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:07.632284    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:07.632480    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702423023
	I1212 23:17:07.785418    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:17:03 UTC 2023
	
	I1212 23:17:07.785481    8472 fix.go:226] clock set: Tue Dec 12 23:17:03 UTC 2023
	 (err=<nil>)
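[Note] Clock sync: the guest's `date +%s.%N` (the %!s(MISSING) rendering is apparently a printf artifact in the logger, not the command that ran) returned 1702423023.005320607, a 4.76s skew against the host, so the guest clock is stamped with the host epoch via `sudo date -s @1702423023`. A sketch of the delta check (the 1s tolerance is illustrative):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1702423023.005320607" // guest `date` output from the log
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest).Seconds()
        fmt.Printf("guest/host delta: %.3fs\n", delta)
        if math.Abs(delta) > 1 { // hypothetical 1s tolerance
            fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
        }
    }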
	I1212 23:17:07.785481    8472 start.go:83] releasing machines lock for "multinode-392000-m02", held for 2m8.1482636s
	I1212 23:17:07.785678    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:09.909750    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:12.452194    8472 out.go:177] * Found network options:
	I1212 23:17:12.452963    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.453612    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.454421    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.455285    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:17:12.456641    8472 proxy.go:119] fail to check proxy env: Error ip not in block
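[Note] The three "fail to check proxy env" warnings are non-fatal: they are logged at W level and startup continues. They indicate that the control-plane IP 172.30.51.245 could not be matched against a CIDR block in the proxy environment (here the NO_PROXY entry is a bare IP, not a block). A simplified sketch of checking whether an address is covered by a NO_PROXY-style list:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // covered reports whether any entry (exact IP or CIDR) covers target.
    func covered(noProxy, target string) bool {
        ip := net.ParseIP(target)
        for _, entry := range strings.Split(noProxy, ",") {
            entry = strings.TrimSpace(entry)
            if entry == target {
                return true
            }
            if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(ip) {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(covered("172.30.51.245", "172.30.51.245")) // true: exact match
        fmt.Println(covered("10.96.0.0/12", "172.30.51.245"))  // false: outside the block
    }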
	I1212 23:17:12.458904    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:12.459089    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:12.471636    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:17:12.471636    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:14.665006    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.330171    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.349676    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.349791    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.350393    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.520588    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:17:17.520698    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0616953s)
	I1212 23:17:17.520789    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1212 23:17:17.520789    8472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0491302s)
	W1212 23:17:17.520789    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:17.540506    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:17.565496    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:17:17.565496    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
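[Note] Competing default CNI configs are disabled, not deleted: the find/-exec mv one-liner renames anything matching *bridge* or *podman* under /etc/cni/net.d to *.mk_disabled (here 87-podman-bridge.conflist), so they can be restored later and only the CNI minikube installs stays active. The same rename pass, sketched:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, m := range matches {
            base := filepath.Base(m)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                fmt.Println("disabling", m)
                os.Rename(m, m+".mk_disabled")
            }
        }
    }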
	I1212 23:17:17.565629    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:17.565729    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:17.592642    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:17:17.606915    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:17:17.641476    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:17:17.660823    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:17:17.677875    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:17:17.711806    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.740097    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:17:17.771613    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.803488    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:17.833971    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:17:17.864431    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:17.880090    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:17:17.891942    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:17.921922    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:18.092747    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
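
	The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the "cgroupfs" cgroup driver, then reloads and restarts the unit. A minimal consolidated sketch of the same edits, taken directly from the commands in the log:

	    # recap of the containerd edits performed above
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	    sudo systemctl daemon-reload && sudo systemctl restart containerd
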
	I1212 23:17:18.119496    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:18.134351    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Unit]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:17:18.152056    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:17:18.152056    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:17:18.152056    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Service]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Type=notify
	I1212 23:17:18.152056    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:17:18.152056    8472 command_runner.go:130] > Environment=NO_PROXY=172.30.51.245
	I1212 23:17:18.152056    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:17:18.152056    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:17:18.152056    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:17:18.152056    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:17:18.152056    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:17:18.152056    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:17:18.152056    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:17:18.152056    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:17:18.153073    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:17:18.153073    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:17:18.153073    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:17:18.153073    8472 command_runner.go:130] > Delegate=yes
	I1212 23:17:18.153073    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:17:18.153073    8472 command_runner.go:130] > KillMode=process
	I1212 23:17:18.153073    8472 command_runner.go:130] > [Install]
	I1212 23:17:18.153073    8472 command_runner.go:130] > WantedBy=multi-user.target
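
	The comments inside the unit above describe the systemd rule at play: for anything other than Type=oneshot, a drop-in must first clear the inherited ExecStart with an empty assignment before supplying its own, or systemd refuses to start the service. A minimal illustrative drop-in (hypothetical path and command, not from this test run):

	    # /etc/systemd/system/docker.service.d/override.conf
	    [Service]
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H fd://
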
	I1212 23:17:18.165057    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.196057    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:18.246410    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.280066    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.313237    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:17:18.368580    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.388251    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:18.419806    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
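
	With /etc/crictl.yaml now pointing at cri-dockerd, crictl talks to Docker through the CRI shim rather than to containerd directly. One way to verify the endpoint by hand, assuming crictl is installed on the node:

	    # confirm crictl can reach the configured runtime endpoint
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
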
	I1212 23:17:18.434054    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:17:18.440054    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:17:18.453333    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:17:18.468540    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:17:18.509927    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:17:18.683814    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:17:18.837593    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:17:18.838769    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:17:18.883547    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:19.063745    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:18:20.172717    8472 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1212 23:18:20.172717    8472 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1212 23:18:20.172717    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1086969s)
	I1212 23:18:20.190447    8472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 23:18:20.208531    8472 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	I1212 23:18:20.208875    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	I1212 23:18:20.208924    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	I1212 23:18:20.208955    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209960    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210035    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1212 23:18:20.210087    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210221    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210255    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210295    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210368    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I1212 23:18:20.218077    8472 out.go:177] 
	W1212 23:18:20.218999    8472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	(journal entries identical to the "sudo journalctl --no-pager -u docker" output quoted line-for-line above)
	
	-- /stdout --
	W1212 23:18:20.219707    8472 out.go:239] * 
	W1212 23:18:20.220544    8472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:18:20.221540    8472 out.go:177] 
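
	The journal pinpoints the failure: on restart, dockerd (pid 1010) could not dial its containerd socket at /run/containerd/containerd.sock within the 60-second deadline, so docker.service exited 1 and minikube aborted with RUNTIME_ENABLE. A hedged set of standard checks one might run on the node to narrow this down (not part of the test run itself):

	    # is a standalone containerd unit holding, or failing to create, the socket?
	    sudo systemctl status containerd --no-pager
	    ls -l /run/containerd/containerd.sock
	    # does the socket answer? (ctr ships with containerd)
	    sudo ctr --address /run/containerd/containerd.sock version
	    sudo journalctl -u containerd --no-pager | tail -n 20
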
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:31:59 UTC. --
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282437620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.284918206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.285109705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286113599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286332798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7694fc2e072409c82e9a89c81cdb1dbf3955a826194d4c6ce69896a818ffd8c/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eec0e2bb8f7fb3f97224e573a86f1d0c8af411baddfa1adaa20402928c80977d/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.073894364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074049263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074069063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074078763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132115055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132325154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132351354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132362153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.818830729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820198629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820221327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820295222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:57 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef8f16e239bc98e7eb9dc0c53fd98c42346ab8c95f8981cda5dde4865c3765b9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 12 23:18:58 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524301867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524431958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524458956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524471055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c0d1460fe14b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   ef8f16e239bc9       busybox-5bc68d56bd-x7ldl
	d33bb583a4c67       ead0a4a53df89                                                                                         17 minutes ago      Running             coredns                   0                   eec0e2bb8f7fb       coredns-5dd5756b68-4xn8h
	f6b34e581fc6d       6e38f40d628db                                                                                         17 minutes ago      Running             storage-provisioner       0                   d7694fc2e0724       storage-provisioner
	58046948f7a39       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              17 minutes ago      Running             kindnet-cni               0                   13c6e0fbb4c87       kindnet-bpcxd
	a260d7090f938       83f6cc407eed8                                                                                         17 minutes ago      Running             kube-proxy                0                   60c6b551ada48       kube-proxy-55nr8
	2313251d444bd       e3db313c6dbc0                                                                                         17 minutes ago      Running             kube-scheduler            0                   2f8be6d8ad0b8       kube-scheduler-multinode-392000
	22eab41fa9507       73deb9a3f7025                                                                                         17 minutes ago      Running             etcd                      0                   bb073669c83d7       etcd-multinode-392000
	235957741d342       d058aa5ab969c                                                                                         17 minutes ago      Running             kube-controller-manager   0                   0a157140134cc       kube-controller-manager-multinode-392000
	6c354edfe4229       7fe0e6f37db33                                                                                         17 minutes ago      Running             kube-apiserver            0                   74927bb72940a       kube-apiserver-multinode-392000
	
	* 
	* ==> coredns [d33bb583a4c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52436 - 14801 "HINFO IN 6583598644721938310.5334892932610769491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082658561s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412009s
	[INFO] 10.244.0.3:57910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.064058426s
	[INFO] 10.244.0.3:37802 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.037057868s
	[INFO] 10.244.0.3:53205 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.098326683s
	[INFO] 10.244.0.3:48065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120602s
	[INFO] 10.244.0.3:58616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050508538s
	[INFO] 10.244.0.3:60247 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114602s
	[INFO] 10.244.0.3:38852 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191504s
	[INFO] 10.244.0.3:34962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01262466s
	[INFO] 10.244.0.3:40837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094102s
	[INFO] 10.244.0.3:50511 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000205404s
	[INFO] 10.244.0.3:46775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218404s
	[INFO] 10.244.0.3:51546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092302s
	[INFO] 10.244.0.3:51278 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170504s
	[INFO] 10.244.0.3:40156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096702s
	[INFO] 10.244.0.3:57387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190803s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170703s
	[INFO] 10.244.0.3:48895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108502s
	[INFO] 10.244.0.3:34622 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141402s
	[INFO] 10.244.0.3:36375 - 5 "PTR IN 1.48.30.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000268705s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-392000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:31:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:29:48 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:29:48 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:29:48 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:29:48 +0000   Tue, 12 Dec 2023 23:14:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.51.245
	  Hostname:    multinode-392000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 430cf12d1f18486bbb2dad5ba35f34f7
	  System UUID:                7ad4f3ea-4ba4-0c41-b258-b71782793bdf
	  Boot ID:                    de054c31-4928-4877-9a0d-94e8f25eb559
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x7ldl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-4xn8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-multinode-392000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-bpcxd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-multinode-392000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-multinode-392000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-55nr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-multinode-392000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node multinode-392000 event: Registered Node multinode-392000 in Controller
	  Normal  NodeReady                17m                kubelet          Node multinode-392000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +1.254662] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.084744] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.170112] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.825297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:13] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136611] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[ +29.496244] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.608816] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.164324] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.190534] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +1.324953] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.324912] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	[  +0.169479] systemd-fstab-generator[1166]: Ignoring "noauto" for root device
	[  +0.169520] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.165018] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.210508] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1309]: Ignoring "noauto" for root device
	[  +2.134792] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.270408] systemd-fstab-generator[1690]: Ignoring "noauto" for root device
	[  +0.838733] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.996306] systemd-fstab-generator[2661]: Ignoring "noauto" for root device
	[ +24.543609] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [22eab41fa950] <==
	* {"level":"info","ts":"2023-12-12T23:14:20.245805Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"93ff368cdeea47a1","initial-advertise-peer-urls":["https://172.30.51.245:2380"],"listen-peer-urls":["https://172.30.51.245:2380"],"advertise-client-urls":["https://172.30.51.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.51.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:14:20.357692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgPreVoteResp from 93ff368cdeea47a1 at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgVoteResp from 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 93ff368cdeea47a1 elected leader 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.361772Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.36777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"93ff368cdeea47a1","local-member-attributes":"{Name:multinode-392000 ClientURLs:[https://172.30.51.245:2379]}","request-path":"/0/members/93ff368cdeea47a1/attributes","cluster-id":"577d8ccb6648d9a8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:20.367821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.367989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.370538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.372122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.51.245:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.409981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"577d8ccb6648d9a8","local-member-id":"93ff368cdeea47a1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410139Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:20.410799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:24:20.417791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2023-12-12T23:24:20.419362Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":681,"took":"1.040537ms","hash":778906542}
	{"level":"info","ts":"2023-12-12T23:24:20.419458Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":778906542,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:29:20.427361Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2023-12-12T23:29:20.428786Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":922,"took":"784.101µs","hash":2156113925}
	{"level":"info","ts":"2023-12-12T23:29:20.428884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2156113925,"revision":922,"compact-revision":681}
	
	* 
	* ==> kernel <==
	*  23:31:59 up 19 min,  0 users,  load average: 0.54, 0.61, 0.45
	Linux multinode-392000 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [58046948f7a3] <==
	* I1212 23:29:51.913929       1 main.go:227] handling current node
	I1212 23:30:01.927979       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:01.928479       1 main.go:227] handling current node
	I1212 23:30:11.936946       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:11.937039       1 main.go:227] handling current node
	I1212 23:30:21.946071       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:21.946116       1 main.go:227] handling current node
	I1212 23:30:31.952473       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:31.952512       1 main.go:227] handling current node
	I1212 23:30:41.958156       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:41.958302       1 main.go:227] handling current node
	I1212 23:30:51.966359       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:30:51.966473       1 main.go:227] handling current node
	I1212 23:31:01.971984       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:31:01.972103       1 main.go:227] handling current node
	I1212 23:31:11.982740       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:31:11.982781       1 main.go:227] handling current node
	I1212 23:31:21.992953       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:31:21.993117       1 main.go:227] handling current node
	I1212 23:31:32.007307       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:31:32.007409       1 main.go:227] handling current node
	I1212 23:31:42.021760       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:31:42.021833       1 main.go:227] handling current node
	I1212 23:31:52.035674       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:31:52.035707       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6c354edfe422] <==
	* I1212 23:14:22.966861       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:22.967846       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:14:22.980339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:23.000634       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:14:23.000942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:23.002240       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:23.002278       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:23.002287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:23.002295       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:23.011378       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:23.760921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:14:23.770137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:14:23.770155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:24.576880       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:24.669218       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:14:24.814943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:14:24.825391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.51.245]
	I1212 23:14:24.827160       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:14:24.832899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:14:24.873569       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:26.688119       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:26.703417       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:14:26.718299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:38.752415       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 23:14:39.103035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [235957741d34] <==
	* I1212 23:14:39.402470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="640.526163ms"
	I1212 23:14:39.423878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.350638ms"
	I1212 23:14:39.455212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.288269ms"
	I1212 23:14:39.455353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.7µs"
	I1212 23:14:39.653487       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 23:14:39.680197       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-5g8ks"
	I1212 23:14:39.711806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.664787ms"
	I1212 23:14:39.734721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.862413ms"
	I1212 23:14:39.785084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.307746ms"
	I1212 23:14:39.785221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.699µs"
	I1212 23:14:55.812545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.499µs"
	I1212 23:14:55.831423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.3µs"
	I1212 23:14:57.948826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.3µs"
	I1212 23:14:57.994852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.967283ms"
	I1212 23:14:57.995045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.9µs"
	I1212 23:14:58.351328       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 23:18:56.342092       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 23:18:56.360783       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-x7ldl"
	I1212 23:18:56.372461       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4rg9t"
	I1212 23:18:56.394927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.064871ms"
	I1212 23:18:56.421496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.459964ms"
	I1212 23:18:56.445750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.867827ms"
	I1212 23:18:56.446077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="103.493µs"
	I1212 23:18:59.452572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.321812ms"
	I1212 23:18:59.452821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.694µs"
	
	* 
	* ==> kube-proxy [a260d7090f93] <==
	* I1212 23:14:40.548388       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:40.568436       1 node.go:141] Successfully retrieved node IP: 172.30.51.245
	I1212 23:14:40.635432       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:40.635716       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:40.638923       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:40.639152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:40.639551       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:40.640017       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:40.641081       1 config.go:188] "Starting service config controller"
	I1212 23:14:40.641288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:40.641685       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:40.641937       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:40.644879       1 config.go:315] "Starting node config controller"
	I1212 23:14:40.645073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:40.742503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:14:40.742567       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:40.745261       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2313251d444b] <==
	* W1212 23:14:22.973548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:22.973806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.868650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:14:23.868677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:14:23.880821       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:14:23.880850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:23.906825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:14:23.907043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:14:23.908460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:23.909050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.954797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:23.954886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:23.961825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:14:23.961846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:14:24.085183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:14:24.085212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:14:24.103672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:14:24.103696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:14:24.119305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:24.119483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:24.143381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:14:24.143650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:14:24.300755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:14:24.300991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:14:25.823950       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:31:59 UTC. --
	Dec 12 23:25:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:26:27 multinode-392000 kubelet[2682]: E1212 23:26:27.002191    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:26:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:26:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:26:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:27:27 multinode-392000 kubelet[2682]: E1212 23:27:27.001369    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:27:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:27:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:27:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:28:27 multinode-392000 kubelet[2682]: E1212 23:28:27.001779    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:28:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:28:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:28:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:29:27 multinode-392000 kubelet[2682]: E1212 23:29:27.005449    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:29:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:29:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:29:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:30:27 multinode-392000 kubelet[2682]: E1212 23:30:27.005887    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:31:27 multinode-392000 kubelet[2682]: E1212 23:31:27.017227    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:31:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:31:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:31:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [f6b34e581fc6] <==
	* I1212 23:14:57.324469       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:14:57.354186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:14:57.354226       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:14:57.375032       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:14:57.377324       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1!
	I1212 23:14:57.379047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"843046f3-0fcd-4f8f-8bbf-0d83d2c229ac", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1 became leader
	I1212 23:14:57.478231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-392000_83cb9dad-c506-4432-a6fc-8b939da966e1!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:31:51.627791   11068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
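
The `Unable to resolve the current Docker CLI context "default"` warning recurs in every stderr capture in this report. The hashed directory in the missing path suggests the Docker CLI keys context metadata under a digest of the context name; a minimal Go sketch of that assumption, useful for checking whether the hash in the warning is simply sha256("default"):

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Assumption: the Docker CLI stores context metadata under
		// ~/.docker/contexts/meta/<sha256 of context name>/meta.json.
		sum := sha256.Sum256([]byte("default"))
		fmt.Printf(`C:\Users\jenkins.minikube7\.docker\contexts\meta\%x\meta.json`+"\n", sum)
	}

If the printed digest matches the one in the warning, the message is only noise from a missing meta.json for the default Docker context and is unrelated to the test failures themselves.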
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000: (12.0653624s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-392000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-4rg9t
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t
helpers_test.go:282: (dbg) kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t:

                                                
                                                
-- stdout --
	Name:             busybox-5bc68d56bd-4rg9t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrqjf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hrqjf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m16s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (45.84s)
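
The describe output above pins down this failure: busybox-5bc68d56bd-4rg9t never left Pending because the busybox Deployment's pod anti-affinity rules leave nowhere to place a second replica while the cluster still has only one node, so the two-pod precondition of the ping test was never met. helpers_test.go:261 finds such stragglers with a status.phase!=Running field selector; a rough client-go equivalent of that check (the context name comes from the logs above, everything else is assumed):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client against the same kubeconfig context that
		// `kubectl --context multinode-392000` targets in helpers_test.go.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "multinode-392000"},
		).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same filter as `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := cs.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("non-running pod: %s/%s (%s)\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}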

                                                
                                    
TestMultiNode/serial/AddNode (250.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-392000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-392000 -v 3 --alsologtostderr: (3m1.1353778s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 status --alsologtostderr
multinode_test.go:117: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-392000 status --alsologtostderr: exit status 2 (35.4706454s)

                                                
                                                
-- stdout --
	multinode-392000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-392000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-392000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:35:14.760262   11436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1212 23:35:14.840390   11436 out.go:296] Setting OutFile to fd 884 ...
	I1212 23:35:14.841304   11436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:35:14.841304   11436 out.go:309] Setting ErrFile to fd 664...
	I1212 23:35:14.841304   11436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:35:14.855304   11436 out.go:303] Setting JSON to false
	I1212 23:35:14.855304   11436 mustload.go:65] Loading cluster: multinode-392000
	I1212 23:35:14.855304   11436 notify.go:220] Checking for updates...
	I1212 23:35:14.856026   11436 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:35:14.856561   11436 status.go:255] checking status of multinode-392000 ...
	I1212 23:35:14.857903   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:35:17.032770   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:17.032851   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:17.032957   11436 status.go:330] multinode-392000 host status = "Running" (err=<nil>)
	I1212 23:35:17.032957   11436 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:35:17.033963   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:35:19.181877   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:19.181927   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:19.181927   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:35:21.708170   11436 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:35:21.708358   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:21.708358   11436 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:35:21.724564   11436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:35:21.725564   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:35:23.867020   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:23.867344   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:23.867463   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:35:26.405676   11436 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:35:26.405948   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:26.406749   11436 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:35:26.507746   11436 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7820436s)
	I1212 23:35:26.523032   11436 ssh_runner.go:195] Run: systemctl --version
	I1212 23:35:26.546189   11436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:35:26.569625   11436 kubeconfig.go:92] found "multinode-392000" server: "https://172.30.51.245:8443"
	I1212 23:35:26.569625   11436 api_server.go:166] Checking apiserver status ...
	I1212 23:35:26.583464   11436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:35:26.619603   11436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2099/cgroup
	I1212 23:35:26.634391   11436 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda728ade276b580d5a5541017805cb6e1/6c354edfe4229f128c63e6e81f9b8205c4c908288534b6c7e0dec3ef2529e203"
	I1212 23:35:26.650861   11436 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda728ade276b580d5a5541017805cb6e1/6c354edfe4229f128c63e6e81f9b8205c4c908288534b6c7e0dec3ef2529e203/freezer.state
	I1212 23:35:26.669180   11436 api_server.go:204] freezer state: "THAWED"
	I1212 23:35:26.669180   11436 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:35:26.678151   11436 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
	I1212 23:35:26.678186   11436 status.go:421] multinode-392000 apiserver status = Running (err=<nil>)
	I1212 23:35:26.678186   11436 status.go:257] multinode-392000 status: &{Name:multinode-392000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 23:35:26.678186   11436 status.go:255] checking status of multinode-392000-m02 ...
	I1212 23:35:26.678879   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:35:28.752224   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:28.752224   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:28.752439   11436 status.go:330] multinode-392000-m02 host status = "Running" (err=<nil>)
	I1212 23:35:28.752439   11436 host.go:66] Checking if "multinode-392000-m02" exists ...
	I1212 23:35:28.753392   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:35:30.935099   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:30.935185   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:30.935264   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:35:33.556431   11436 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:35:33.556431   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:33.556517   11436 host.go:66] Checking if "multinode-392000-m02" exists ...
	I1212 23:35:33.571252   11436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:35:33.571252   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:35:35.675019   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:35.675019   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:35.675110   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:35:38.197883   11436 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:35:38.197883   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:38.198454   11436 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:35:38.299948   11436 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7286745s)
	I1212 23:35:38.315280   11436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:35:38.338565   11436 status.go:257] multinode-392000-m02 status: &{Name:multinode-392000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 23:35:38.338635   11436 status.go:255] checking status of multinode-392000-m03 ...
	I1212 23:35:38.339196   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m03 ).state
	I1212 23:35:40.500147   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:40.500371   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:40.500371   11436 status.go:330] multinode-392000-m03 host status = "Running" (err=<nil>)
	I1212 23:35:40.500371   11436 host.go:66] Checking if "multinode-392000-m03" exists ...
	I1212 23:35:40.501044   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m03 ).state
	I1212 23:35:42.660643   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:42.661016   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:42.661116   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m03 ).networkadapters[0]).ipaddresses[0]
	I1212 23:35:45.221967   11436 main.go:141] libmachine: [stdout =====>] : 172.30.48.192
	
	I1212 23:35:45.222105   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:45.222105   11436 host.go:66] Checking if "multinode-392000-m03" exists ...
	I1212 23:35:45.235863   11436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:35:45.235863   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m03 ).state
	I1212 23:35:47.350170   11436 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:35:47.350170   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:47.350246   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m03 ).networkadapters[0]).ipaddresses[0]
	I1212 23:35:49.906921   11436 main.go:141] libmachine: [stdout =====>] : 172.30.48.192
	
	I1212 23:35:49.907071   11436 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:35:49.907649   11436 sshutil.go:53] new ssh client: &{IP:172.30.48.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m03\id_rsa Username:docker}
	I1212 23:35:50.024313   11436 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7884284s)
	I1212 23:35:50.039085   11436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:35:50.062653   11436 status.go:257] multinode-392000-m03 status: &{Name:multinode-392000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
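Aside: every "[executing ==>]" / "[stdout =====>]" pair in the stderr block above follows one pattern: minikube's hyperv driver shells out to PowerShell, first for the VM's state, then for the first IP address on its first network adapter. Below is a minimal Go sketch of that pattern; the psRun helper is illustrative only, not minikube's actual driver code, and the VM name is taken from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    const powershell = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

    // psRun executes a PowerShell snippet non-interactively and returns trimmed stdout.
    func psRun(script string) (string, error) {
        out, err := exec.Command(powershell, "-NoProfile", "-NonInteractive", script).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        vm := "multinode-392000-m02" // VM name taken from the log above
        state, err := psRun(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
        if err != nil {
            fmt.Println("state query failed:", err)
            return
        }
        // Same query the log repeats to learn the guest's address.
        ip, _ := psRun(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
        fmt.Printf("state=%s ip=%s\n", state, ip)
    }

Note how each query above costs roughly two seconds of PowerShell startup, which is why the status check completes in multiples of that.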
multinode_test.go:119: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-392000 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000
E1212 23:35:53.185443   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000: (11.9756046s)
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25: (8.2779011s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-392000 -- apply -f                   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC | 12 Dec 23 23:18 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- rollout                    | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t -- nslookup              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl -- nslookup              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t                          |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | busybox-5bc68d56bd-x7ldl                          |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-x7ldl -- sh                    |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.48.1                          |                  |                   |         |                     |                     |
	| node    | add -p multinode-392000 -v 3                      | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:32 UTC | 12 Dec 23 23:35 UTC |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
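The long run of identical "get pods -o jsonpath='{.items[*].status.podIP}'" rows in the Audit table is the DNS test polling until every busybox pod reports an IP. A rough Go sketch of that loop follows, shelling out to the same binary this report exercises; the 10-second interval and the two-pod exit condition are assumptions for illustration, not the actual logic in multinode_test.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        for i := 0; i < 30; i++ {
            // Mirrors the Audit rows: minikube kubectl -p multinode-392000 -- get pods ...
            out, err := exec.Command("out/minikube-windows-amd64.exe", "kubectl",
                "-p", "multinode-392000", "--", "get", "pods",
                "-o", "jsonpath={.items[*].status.podIP}").Output()
            if err == nil {
                ips := strings.Fields(string(out))
                if len(ips) >= 2 { // assumed: one busybox pod per node
                    fmt.Println("pod IPs:", ips)
                    return
                }
            }
            time.Sleep(10 * time.Second) // assumed polling interval
        }
        fmt.Println("timed out waiting for pod IPs")
    }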
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:11:30
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.071716    8472 out.go:309] Setting ErrFile to fd 756...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.094706    8472 out.go:303] Setting JSON to false
	I1212 23:11:30.097728    8472 start.go:128] hostinfo: {"hostname":"minikube7","uptime":76287,"bootTime":1702346402,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:11:30.097728    8472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:11:30.099331    8472 out.go:177] * [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:11:30.099722    8472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:11:30.099722    8472 notify.go:220] Checking for updates...
	I1212 23:11:30.100958    8472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:11:30.101483    8472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:11:30.102516    8472 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:11:30.103354    8472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:11:30.104853    8472 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:11:35.379035    8472 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:11:35.380001    8472 start.go:298] selected driver: hyperv
	I1212 23:11:35.380001    8472 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:11:35.380001    8472 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:11:35.430879    8472 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:11:35.431976    8472 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:11:35.432174    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:11:35.432174    8472 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:11:35.432174    8472 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:11:35.432174    8472 start_flags.go:323] config:
	{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:35.432785    8472 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:11:35.434592    8472 out.go:177] * Starting control plane node multinode-392000 in cluster multinode-392000
	I1212 23:11:35.434882    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:11:35.435410    8472 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:11:35.435444    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:11:35.435894    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:11:35.435894    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:11:35.436458    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:11:35.436458    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json: {Name:mk07adc881ba1a1ec87edb34c2760e84e9f12eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:35.438010    8472 start.go:365] acquiring machines lock for multinode-392000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:11:35.438172    8472 start.go:369] acquired machines lock for "multinode-392000" in 43.3µs
	I1212 23:11:35.438240    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:11:35.438240    8472 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 23:11:35.439294    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:11:35.439734    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:11:35.439996    8472 client.go:168] LocalClient.Create starting
	I1212 23:11:35.440162    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441050    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441543    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:11:37.487993    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:11:39.204044    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:11:39.204143    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:39.204222    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:40.663233    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:44.190819    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:44.191081    8472 main.go:141] libmachine: [stderr =====>] : 
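The switch-discovery query above asks PowerShell to emit JSON, which makes it easy to decode host-side. A small Go sketch, assuming a struct shaped after the logged output (it is not a minikube type); the GUID in the filter is the well-known Default Switch ID that also appears in the result.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    const powershell = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

    // vmSwitch mirrors the fields selected in the logged query.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        // Query copied verbatim from the log above.
        script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)`
        out, err := exec.Command(powershell, "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }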
	I1212 23:11:44.194062    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:11:44.711737    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: Creating VM...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:47.732456    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:47.732576    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:47.732727    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:11:47.732880    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:49.467956    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:49.468070    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:49.468070    8472 main.go:141] libmachine: Creating VHD
	I1212 23:11:49.468208    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F469FE2D-E21B-45E1-BE12-1FCB18DB12B2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:11:53.108721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:56.276637    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -SizeBytes 20000MB
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stderr =====>] : 
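The VHD sequence logged between 23:11:49 and 23:11:58 is: create a tiny fixed VHD, write a tar header and the generated SSH key into the raw file (the "Writing magic tar header" lines; presumably the guest unpacks this on first boot to install the key), convert it to a dynamic VHD, then resize it to the requested 20000MB. A Go sketch of the three PowerShell steps, using the paths from the log; the wrapper itself is illustrative only, and the tar write between steps is omitted.

    package main

    import (
        "fmt"
        "os/exec"
    )

    const powershell = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

    func main() {
        dir := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000`
        steps := []string{
            fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
            // ...here minikube writes the tar header plus SSH key into fixed.vhd...
            fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%[1]s\fixed.vhd' -DestinationPath '%[1]s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir),
            fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
        }
        for _, s := range steps {
            if out, err := exec.Command(powershell, "-NoProfile", "-NonInteractive", s).CombinedOutput(); err != nil {
                panic(fmt.Sprintf("%s: %v\n%s", s, err, out))
            }
        }
    }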
	I1212 23:11:58.764692    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-392000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000 -DynamicMemoryEnabled $false
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:04.436332    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000 -Count 2
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\boot2docker.iso'
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd'
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:11.817904    8472 main.go:141] libmachine: Starting VM...
	I1212 23:12:11.817904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:12:14.636759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:16.857062    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:16.857260    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:16.857330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:20.386945    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:22.605951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:26.191747    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:28.348821    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:30.824944    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:30.825184    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:31.825449    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:36.445712    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:36.445785    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:37.459217    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:42.223526    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:44.305043    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:44.305406    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:44.305406    8472 machine.go:88] provisioning docker machine ...
	I1212 23:12:44.305506    8472 buildroot.go:166] provisioning hostname "multinode-392000"
	I1212 23:12:44.305650    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:46.463699    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:48.946017    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:48.946116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:48.952068    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:48.964084    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:48.964084    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname
	I1212 23:12:49.130659    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000
	
	I1212 23:12:49.130793    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:51.216440    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:53.725386    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:53.726016    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:53.726016    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:12:53.876910    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:12:53.876910    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:12:53.877039    8472 buildroot.go:174] setting up certificates
	I1212 23:12:53.877109    8472 provision.go:83] configureAuth start
	I1212 23:12:53.877163    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:55.991772    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:58.499603    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:00.594939    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:03.100178    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:03.100273    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:03.100273    8472 provision.go:138] copyHostCerts
	I1212 23:13:03.100538    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:13:03.100666    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:13:03.100666    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:13:03.101260    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:13:03.102786    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:13:03.103156    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:13:03.103156    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:13:03.103581    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:13:03.104593    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:13:03.105032    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:13:03.105032    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:13:03.105182    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:13:03.106302    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000 san=[172.30.51.245 172.30.51.245 localhost 127.0.0.1 minikube multinode-392000]
	I1212 23:13:03.360027    8472 provision.go:172] copyRemoteCerts
	I1212 23:13:03.374057    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:13:03.374057    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:08.008195    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:08.116237    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7420653s)
	I1212 23:13:08.116237    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:13:08.116427    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:13:08.152557    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:13:08.153040    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:13:08.195988    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:13:08.196559    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:13:08.232338    8472 provision.go:86] duration metric: configureAuth took 14.3551646s
	I1212 23:13:08.232338    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:13:08.233351    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:13:08.233351    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:10.326980    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:12.830327    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:12.831103    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:12.831103    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:13:12.971332    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:13:12.971397    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:13:12.971686    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:13:12.971759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:17.524781    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:17.524929    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:17.532264    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:17.532875    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:17.533036    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:13:17.693682    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:13:17.693682    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:19.797719    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:22.305428    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:22.305611    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:22.311364    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:22.312148    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:22.312148    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:13:23.268460    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
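The diff-or-install one-liner above only swaps in the rendered unit and restarts docker when it differs from what is on disk; on this fresh VM the diff fails because no unit exists yet, so the fallback branch runs and systemd enables the service, hence the "Created symlink" message. A Go sketch of composing that command, where runRemote is a stand-in (an assumption) for an SSH runner like the ssh_runner seen above:

    package main

    import "fmt"

    // installIfChanged builds the same idempotent shell command the log shows:
    // replace the unit and restart docker only when the rendered copy differs.
    func installIfChanged(runRemote func(string) error, unit string) error {
        cmd := fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
                "sudo systemctl -f restart docker; }",
            unit)
        return runRemote(cmd)
    }

    func main() {
        // Print the command instead of actually running it over SSH.
        _ = installIfChanged(func(c string) error { fmt.Println(c); return nil },
            "/lib/systemd/system/docker.service")
    }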
	
	I1212 23:13:23.268460    8472 machine.go:91] provisioned docker machine in 38.9628792s
	I1212 23:13:23.268460    8472 client.go:171] LocalClient.Create took 1m47.8279792s
	I1212 23:13:23.268460    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m47.8282413s
	I1212 23:13:23.268460    8472 start.go:300] post-start starting for "multinode-392000" (driver="hyperv")
	I1212 23:13:23.268460    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:13:23.283134    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:13:23.283134    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:25.344143    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:25.344398    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:25.344531    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:27.853202    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:27.960465    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6773102s)
	I1212 23:13:27.975019    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:13:27.981168    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:13:27.981317    8472 command_runner.go:130] > ID=buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:13:27.981317    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:13:27.981408    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:13:27.981509    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:13:27.981573    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:13:27.982899    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:13:27.982899    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:13:27.996731    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:13:28.011281    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
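
filesync.go scans the local addons and files trees and maps every file it finds onto a guest path (here 138162.pem lands in /etc/ssl/certs). A rough sketch of that scan with filepath.WalkDir; the root and the one-to-one path mapping are simplifying assumptions:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // scanAssets walks root and returns a guest destination for every
    // regular file, mirroring the directory layout under root.
    func scanAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, path)
            if err != nil {
                return err
            }
            // Convert Windows separators to the guest's forward slashes.
            assets[path] = "/" + strings.ReplaceAll(rel, `\`, "/")
            return nil
        })
        return assets, err
    }

    func main() {
        m, err := scanAssets(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\files`)
        if err != nil {
            fmt.Println(err)
            return
        }
        for src, dst := range m {
            fmt.Printf("%s -> %s\n", src, dst)
        }
    }
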
	I1212 23:13:28.049499    8472 start.go:303] post-start completed in 4.7810169s
	I1212 23:13:28.051903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:30.124520    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:32.635986    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:32.636168    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:32.636335    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:13:32.639612    8472 start.go:128] duration metric: createHost completed in 1m57.2008454s
	I1212 23:13:32.639734    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:37.252006    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:37.252675    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:37.252675    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:13:37.394466    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422817.389981544
	
	I1212 23:13:37.394466    8472 fix.go:206] guest clock: 1702422817.389981544
	I1212 23:13:37.394466    8472 fix.go:219] Guest: 2023-12-12 23:13:37.389981544 +0000 UTC Remote: 2023-12-12 23:13:32.6396781 +0000 UTC m=+122.746612401 (delta=4.750303444s)
	I1212 23:13:37.394466    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:39.525951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:42.048856    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:42.049171    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:42.054999    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:42.057020    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:42.057020    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702422817
	I1212 23:13:42.207558    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:13:37 UTC 2023
	
	I1212 23:13:42.207558    8472 fix.go:226] clock set: Tue Dec 12 23:13:37 UTC 2023
	 (err=<nil>)
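
fix.go reads the guest clock with date +%s.%N, compares it to the host clock (a 4.75s delta above), and issues sudo date -s @<seconds> when they drift apart. A sketch of that decision; the 2-second threshold and the reset-to-host policy are assumptions for illustration (the log above resets to the guest's own whole-second reading):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockFixCommand parses the guest's `date +%s.%N` output and, when it
    // differs from hostNow by more than threshold, returns the command that
    // would reset the guest clock. An empty string means no fix is needed.
    func clockFixCommand(guestOut string, hostNow time.Time, threshold time.Duration) (string, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return "", err
        }
        var nsec int64
        if len(parts) == 2 {
            // Assumes the 9-digit nanosecond field that %N prints.
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        guest := time.Unix(sec, nsec)
        delta := guest.Sub(hostNow)
        if delta < 0 {
            delta = -delta
        }
        if delta <= threshold {
            return "", nil
        }
        // One plausible policy: snap the guest to the host's whole-second clock.
        return fmt.Sprintf("sudo date -s @%d", hostNow.Unix()), nil
    }

    func main() {
        cmd, err := clockFixCommand("1702422817.389981544", time.Now(), 2*time.Second)
        fmt.Println(cmd, err)
    }
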
	I1212 23:13:42.207558    8472 start.go:83] releasing machines lock for "multinode-392000", held for 2m6.7687735s
	I1212 23:13:42.208388    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:46.748039    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:46.748116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:46.752230    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:13:46.752339    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:46.765270    8472 ssh_runner.go:195] Run: cat /version.json
	I1212 23:13:46.765814    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:51.518393    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.518589    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.519047    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.538571    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.618146    8472 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 23:13:51.618146    8472 ssh_runner.go:235] Completed: cat /version.json: (4.8528548s)
	I1212 23:13:51.632470    8472 ssh_runner.go:195] Run: systemctl --version
	I1212 23:13:51.705182    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:13:51.705326    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9530322s)
	I1212 23:13:51.705474    8472 command_runner.go:130] > systemd 247 (247)
	I1212 23:13:51.705474    8472 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:13:51.717133    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:13:51.725591    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:13:51.726008    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:13:51.738060    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:13:51.760525    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:13:51.761431    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:13:51.761431    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:51.761737    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:51.787290    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:13:51.802604    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:13:51.833298    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:13:51.849124    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:13:51.865424    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:13:51.896430    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.925062    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:13:51.954292    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.986199    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:13:52.018341    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
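
The run of sed commands above rewrites /etc/containerd/config.toml in place: force SystemdCgroup = false for the cgroupfs driver and migrate legacy runtime names to io.containerd.runc.v2. The same substitutions expressed with Go regexps (illustrative; minikube performs the edits remotely with sed):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteContainerdConfig applies the same substitutions the sed
    // commands perform: force SystemdCgroup = false and migrate the
    // legacy runtime names to io.containerd.runc.v2.
    func rewriteContainerdConfig(cfg string) string {
        cfg = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
            ReplaceAllString(cfg, `${1}SystemdCgroup = false`)
        cfg = regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"`).
            ReplaceAllString(cfg, `"io.containerd.runc.v2"`)
        cfg = regexp.MustCompile(`"io\.containerd\.runc\.v1"`).
            ReplaceAllString(cfg, `"io.containerd.runc.v2"`)
        return cfg
    }

    func main() {
        in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    runtime_type = "io.containerd.runtime.v1.linux"`
        fmt.Println(rewriteContainerdConfig(in))
    }
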
	I1212 23:13:52.051014    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:13:52.066722    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:13:52.079021    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:13:52.108672    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:52.285653    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 23:13:52.311279    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:52.326723    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Unit]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:13:52.345659    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:13:52.345659    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:13:52.345659    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Service]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Type=notify
	I1212 23:13:52.345659    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:13:52.345659    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:13:52.346602    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:13:52.346602    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:13:52.346602    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:13:52.346602    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:13:52.346602    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:13:52.346602    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:13:52.346602    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:13:52.346602    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:13:52.346602    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:13:52.346602    8472 command_runner.go:130] > Delegate=yes
	I1212 23:13:52.346602    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:13:52.346602    8472 command_runner.go:130] > KillMode=process
	I1212 23:13:52.346602    8472 command_runner.go:130] > [Install]
	I1212 23:13:52.346602    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:13:52.361605    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.398612    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:13:52.438497    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.478249    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.515469    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:13:52.572526    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.596922    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:52.625715    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:13:52.640295    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:13:52.648317    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:13:52.660918    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:13:52.675527    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
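
"scp memory --> <path>" means the payload never exists as a local file: an in-memory byte slice is streamed down the SSH session into the target, with sudo tee as the remote sink so root-owned paths work. A sketch of that primitive (the credentials and the unit content are placeholders):

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // pushBytes writes data to remotePath over an established SSH client by
    // piping it into `sudo tee`, which works even when the target needs root.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee echoes the content back; discard it with >/dev/null.
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // stand-in credential
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),                   // test VM only
        }
        client, err := ssh.Dial("tcp", "172.30.51.245:22", cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer client.Close()
        unit := []byte("[Service]\nExecStart=\n")
        fmt.Println(pushBytes(client, unit, "/etc/systemd/system/cri-docker.service.d/10-cni.conf"))
    }
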
	I1212 23:13:52.716542    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:13:52.882321    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:13:53.028395    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:13:53.028810    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:13:53.070347    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:53.231794    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:13:54.707655    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4758548s)
	I1212 23:13:54.722714    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:54.886957    8472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 23:13:55.059072    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:55.219495    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.397909    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 23:13:55.436243    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.597738    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 23:13:55.697504    8472 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 23:13:55.711625    8472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:13:55.718995    8472 command_runner.go:130] > Device: 16h/22d	Inode: 928         Links: 1
	I1212 23:13:55.718995    8472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 23:13:55.719086    8472 command_runner.go:130] > Access: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Modify: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Change: 2023-12-12 23:13:55.617702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] >  Birth: -
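
start.go:522 gives cri-dockerd up to 60 seconds for its socket to appear, probing with stat until it does; the successful probe is shown above. Locally the same wait is a stat poll against a deadline, as in this sketch (the 500ms interval is an assumption, and minikube runs the stat over SSH rather than on the local filesystem):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second, 500*time.Millisecond)
        fmt.Println(err)
    }
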
	I1212 23:13:55.719245    8472 start.go:543] Will wait 60s for crictl version
	I1212 23:13:55.732224    8472 ssh_runner.go:195] Run: which crictl
	I1212 23:13:55.737239    8472 command_runner.go:130] > /usr/bin/crictl
	I1212 23:13:55.751402    8472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:13:55.821560    8472 command_runner.go:130] > Version:  0.1.0
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeName:  docker
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:13:55.821684    8472 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 23:13:55.831458    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.865302    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.877867    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.906635    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.909704    8472 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 23:13:55.909704    8472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: 172.30.48.1/20
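
ip.go walks the host's network interfaces for the first one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.30.48.1 above) as host.minikube.internal. The same walk with the standard library:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterfacePrefix returns the first IPv4 address on the first
    // interface whose name starts with prefix.
    func ipForInterfacePrefix(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok {
                    if v4 := ipnet.IP.To4(); v4 != nil {
                        return v4, nil // skips the fe80:: link-local address
                    }
                }
            }
        }
        return nil, fmt.Errorf("no interface matches prefix %q", prefix)
    }

    func main() {
        ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }
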
	I1212 23:13:55.931095    8472 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 23:13:55.936984    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:13:55.954782    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:13:55.966850    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:13:55.989987    8472 docker.go:671] Got preloaded images: 
	I1212 23:13:55.989987    8472 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 23:13:56.002978    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:13:56.016572    8472 command_runner.go:139] > {"Repositories":{}}
	I1212 23:13:56.029505    8472 ssh_runner.go:195] Run: which lz4
	I1212 23:13:56.035359    8472 command_runner.go:130] > /usr/bin/lz4
	I1212 23:13:56.035359    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:13:56.046382    8472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:13:56.052856    8472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 23:13:58.736125    8472 docker.go:635] Took 2.700536 seconds to copy over tarball
	I1212 23:13:58.753146    8472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:14:08.022919    8472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2697318s)
	I1212 23:14:08.022919    8472 ssh_runner.go:146] rm: /preloaded.tar.lz4
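
The guest unpacks the preload with sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 (9.27s above), then deletes the tarball. For reference, the equivalent decompress-and-untar in Go, assuming the third-party github.com/pierrec/lz4/v4 module (minikube itself shells out to tar):

    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "os"
        "path/filepath"

        "github.com/pierrec/lz4/v4"
    )

    // extractTarLz4 unpacks an lz4-compressed tarball under dest,
    // creating directories and regular files (other entry types skipped).
    func extractTarLz4(path, dest string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        tr := tar.NewReader(lz4.NewReader(f))
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
            target := filepath.Join(dest, hdr.Name)
            switch hdr.Typeflag {
            case tar.TypeDir:
                if err := os.MkdirAll(target, 0o755); err != nil {
                    return err
                }
            case tar.TypeReg:
                if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
                    return err
                }
                out, err := os.Create(target)
                if err != nil {
                    return err
                }
                if _, err := io.Copy(out, tr); err != nil {
                    out.Close()
                    return err
                }
                out.Close()
            }
        }
    }

    func main() {
        fmt.Println(extractTarLz4("/preloaded.tar.lz4", "/var"))
    }
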
	I1212 23:14:08.095190    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:14:08.111721    8472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 23:14:08.111721    8472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 23:14:08.157625    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:14:08.340167    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:14:10.676687    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3364436s)
	I1212 23:14:10.688217    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:14:10.713622    8472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 23:14:10.713688    8472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:10.713884    8472 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 23:14:10.713884    8472 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:14:10.725093    8472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 23:14:10.761269    8472 command_runner.go:130] > cgroupfs
	I1212 23:14:10.761441    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:10.761635    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:10.761699    8472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:14:10.761699    8472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.51.245 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-392000 NodeName:multinode-392000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.51.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.51.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:14:10.761920    8472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.51.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-392000"
	  kubeletExtraArgs:
	    node-ip: 172.30.51.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.51.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
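
kubeadm.go:181 renders the config above from the option struct logged at kubeadm.go:176. A toy rendering of just the InitConfiguration stanza with text/template; the real template obviously carries many more fields:

    package main

    import (
        "os"
        "text/template"
    )

    type initConfig struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        c := initConfig{
            AdvertiseAddress: "172.30.51.245",
            BindPort:         8443,
            NodeName:         "multinode-392000",
            CRISocket:        "/var/run/cri-dockerd.sock",
        }
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, c); err != nil {
            panic(err)
        }
    }
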
	
	I1212 23:14:10.762050    8472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:14:10.779262    8472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:14:10.794245    8472 command_runner.go:130] > kubeadm
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubectl
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubelet
	I1212 23:14:10.794911    8472 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:14:10.809051    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:14:10.823032    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:14:10.848411    8472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:14:10.870951    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 23:14:10.911088    8472 ssh_runner.go:195] Run: grep 172.30.51.245	control-plane.minikube.internal$ /etc/hosts
	I1212 23:14:10.917196    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.51.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:14:10.933858    8472 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000 for IP: 172.30.51.245
	I1212 23:14:10.933934    8472 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:10.934858    8472 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 23:14:10.935530    8472 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 23:14:10.936524    8472 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key
	I1212 23:14:10.936810    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt with IP's: []
	I1212 23:14:11.093297    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt ...
	I1212 23:14:11.093297    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt: {Name:mk11a4d3835ab9ea840eb8ac6add84affb6c8dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.094980    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key ...
	I1212 23:14:11.094980    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key: {Name:mk06fddcf6422638da0b31b4d428923c70703238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.095936    8472 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa
	I1212 23:14:11.096955    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa with IP's: [172.30.51.245 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:14:11.196952    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa ...
	I1212 23:14:11.197202    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa: {Name:mkdf435dcc8983bec1e572c7a448162db34b2756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.198846    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa ...
	I1212 23:14:11.198846    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa: {Name:mk41672c6a02cbb3382bef7d288d52f8f77ae5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.199921    8472 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt
	I1212 23:14:11.213239    8472 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key
	I1212 23:14:11.214508    8472 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key
	I1212 23:14:11.214661    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt with IP's: []
	I1212 23:14:11.328325    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt ...
	I1212 23:14:11.328325    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt: {Name:mk6e1ad80e6dad066789266c677d39834bd11583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.330616    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key ...
	I1212 23:14:11.330616    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key: {Name:mk3959079764fecf7ecbee13715f18146dcf3506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
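
crypto.go mints each profile certificate by signing it with the cached minikube CA; the apiserver cert above is issued for the node, service, and loopback IPs. A compressed sketch of issuing an IP-SAN server certificate from a CA with crypto/x509 (serials, key sizes, and lifetimes simplified; error checks in main elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a certificate for the given IPs with caCert/caKey.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }

    func main() {
        // Self-signed CA for the demo; minikube reuses the one cached under .minikube\.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        ips := []net.IP{net.ParseIP("172.30.51.245"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")}
        certPEM, _, err := issueServerCert(caCert, caKey, ips)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", certPEM)
    }
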
	I1212 23:14:11.332006    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:14:11.332144    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:14:11.332442    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:14:11.342046    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:14:11.342358    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:14:11.342600    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:14:11.342813    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:14:11.343009    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:14:11.343165    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem (1338 bytes)
	W1212 23:14:11.343825    8472 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816_empty.pem, impossibly tiny 0 bytes
	I1212 23:14:11.343825    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 23:14:11.344117    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 23:14:11.344381    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 23:14:11.344630    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 23:14:11.344862    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem (1708 bytes)
	I1212 23:14:11.344862    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem -> /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.345574    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.345718    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:11.345852    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:14:11.386214    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:14:11.425674    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:14:11.464191    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:14:11.502474    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:14:11.538128    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:14:11.575129    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:14:11.613906    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:14:11.650659    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem --> /usr/share/ca-certificates/13816.pem (1338 bytes)
	I1212 23:14:11.686706    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /usr/share/ca-certificates/138162.pem (1708 bytes)
	I1212 23:14:11.726349    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:14:11.762200    8472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:14:11.800421    8472 ssh_runner.go:195] Run: openssl version
	I1212 23:14:11.809841    8472 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:14:11.823469    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13816.pem && ln -fs /usr/share/ca-certificates/13816.pem /etc/ssl/certs/13816.pem"
	I1212 23:14:11.861330    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.882273    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.889871    8472 command_runner.go:130] > 51391683
	I1212 23:14:11.903385    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13816.pem /etc/ssl/certs/51391683.0"
	I1212 23:14:11.935310    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138162.pem && ln -fs /usr/share/ca-certificates/138162.pem /etc/ssl/certs/138162.pem"
	I1212 23:14:11.964261    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970426    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970992    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.982253    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.990140    8472 command_runner.go:130] > 3ec20f2e
	I1212 23:14:12.009886    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138162.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:14:12.038995    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:14:12.069702    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.089604    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.096884    8472 command_runner.go:130] > b5213941
	I1212 23:14:12.110390    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
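
certs.go installs each CA under /etc/ssl/certs as <subject-hash>.0, the layout OpenSSL expects: the hash comes from openssl x509 -hash -noout -in <pem> (b5213941 for minikubeCA above) and the guarded ln -fs creates the link. The same two steps in Go, assuming the openssl binary is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA symlinks pemPath into certsDir under its OpenSSL subject hash.
    func installCA(pemPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
        // Mirror `ln -fs`: replace any existing link.
        os.Remove(link)
        return link, os.Symlink(pemPath, link)
    }

    func main() {
        link, err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println(link, err)
    }
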
	I1212 23:14:12.140395    8472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:14:12.146418    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 kubeadm.go:404] StartCluster: {Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:14:12.155995    8472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 23:14:12.194954    8472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:14:12.223698    8472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:14:12.252003    8472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266543    8472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266717    8472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
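
Note how every kubeadm line below appears twice, relayed once through kubeadm.go:322 and once through command_runner.go:130: the runner scans the process output line by line and hands each line to both consumers. A sketch of that fan-out over a local command (the SSH case substitutes session pipes):

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
    )

    // streamLines runs cmd and hands each stdout line to every sink.
    func streamLines(cmd *exec.Cmd, sinks ...func(string)) error {
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return err
        }
        if err := cmd.Start(); err != nil {
            return err
        }
        sc := bufio.NewScanner(stdout)
        for sc.Scan() {
            for _, sink := range sinks {
                sink(sc.Text())
            }
        }
        if err := sc.Err(); err != nil {
            return err
        }
        return cmd.Wait()
    }

    func main() {
        err := streamLines(exec.Command("echo", "[init] Using Kubernetes version: v1.28.4"),
            func(l string) { fmt.Println("kubeadm:", l) },
            func(l string) { fmt.Println("command_runner:", l) },
        )
        if err != nil {
            fmt.Println(err)
        }
    }
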
	I1212 23:14:12.516893    8472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.516947    8472 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.517226    8472 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:14:12.517226    8472 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:14:13.027121    8472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027121    8472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027384    8472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027384    8472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027545    8472 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.027656    8472 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.446026    8472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447343    8472 out.go:204]   - Generating certificates and keys ...
	I1212 23:14:13.446026    8472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447732    8472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:14:13.447800    8472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:14:13.448160    8472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.448217    8472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.576197    8472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.576331    8472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.756341    8472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.756398    8472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.844910    8472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:13.844957    8472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:14.189004    8472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.189084    8472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.353924    8472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.353924    8472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.354351    8472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.354351    8472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.509618    8472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.509618    8472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.510200    8472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.510200    8472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.634812    8472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.634883    8472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.965686    8472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:14.965747    8472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:15.155790    8472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:14:15.155863    8472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:14:15.156194    8472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.156194    8472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.627970    8472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:15.628062    8472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:16.106269    8472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.106461    8472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.241202    8472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.241256    8472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.532306    8472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.532306    8472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.533302    8472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.533432    8472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.538562    8472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.538657    8472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.539723    8472 out.go:204]   - Booting up control plane ...
	I1212 23:14:16.539967    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.540045    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.541855    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.541855    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.543221    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.543286    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.570893    8472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.570998    8472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.572167    8472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572329    8472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572476    8472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:14:16.572590    8472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:14:16.741649    8472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:16.741649    8472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:25.247209    8472 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247209    8472 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247636    8472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.247636    8472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.274937    8472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.274937    8472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.809600    8472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.809600    8472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.810164    8472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:25.810216    8472 kubeadm.go:322] [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:26.326643    8472 kubeadm.go:322] [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.327542    8472 out.go:204]   - Configuring RBAC rules ...
	I1212 23:14:26.326643    8472 command_runner.go:130] > [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.328018    8472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.328018    8472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.341522    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.341728    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.354025    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.354025    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.359843    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.359843    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.364553    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.364553    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.369249    8472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.369249    8472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.393459    8472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.393481    8472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.711238    8472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.711357    8472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.750599    8472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.750686    8472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.751909    8472 kubeadm.go:322] 
	I1212 23:14:26.752244    8472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752244    8472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752424    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.753252    8472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753252    8472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753309    8472 kubeadm.go:322] 
	I1212 23:14:26.753415    8472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322] 
	I1212 23:14:26.753445    8472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.753445    8472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.754014    8472 kubeadm.go:322] 
	I1212 23:14:26.754183    8472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754220    8472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754289    8472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 kubeadm.go:322] 
	I1212 23:14:26.754289    8472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754289    8472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754820    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754820    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754878    8472 kubeadm.go:322] 	--control-plane 
	I1212 23:14:26.754917    8472 command_runner.go:130] > 	--control-plane 
	I1212 23:14:26.754917    8472 kubeadm.go:322] 
	I1212 23:14:26.754995    8472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] 
	I1212 23:14:26.755165    8472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755165    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755707    8472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
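(Editor's note: the `--discovery-token-ca-cert-hash sha256:…` value in the join commands above is not a hash of the whole CA certificate but of its DER-encoded Subject Public Key Info. A minimal Go sketch of that derivation, assuming the `/var/lib/minikube/certs/ca.crt` path used by the [certs] phase above; run inside the VM it should reproduce the 149ee08a… digest in the log:)

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed path: the [certs] phase above uses /var/lib/minikube/certs.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("ca.crt contains no PEM block")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the certificate.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
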
	I1212 23:14:26.755762    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:26.755762    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:26.756717    8472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:14:26.771363    8472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 23:14:26.781345    8472 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: 2023-12-12 23:12:39.138849800 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Change: 2023-12-12 23:12:30.064000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] >  Birth: -
	I1212 23:14:26.781345    8472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:14:26.781345    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:14:26.831214    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:14:28.360489    8472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5292685s)
	I1212 23:14:28.360489    8472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:14:28.377434    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.378438    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-392000 minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.385676    8472 command_runner.go:130] > -16
	I1212 23:14:28.385745    8472 ops.go:34] apiserver oom_adj: -16
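(Editor's note: the `oom_adj: -16` line above comes from reading `/proc/$(pgrep kube-apiserver)/oom_adj`; a negative value tells the kernel's OOM killer to spare the apiserver under memory pressure. A sketch of the same probe in Go, assuming a single kube-apiserver process as in this run:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Assumes exactly one kube-apiserver process, matching the log above.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(string(out))
		raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw)))
	}
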
	I1212 23:14:28.554211    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:14:28.554334    8472 command_runner.go:130] > node/multinode-392000 labeled
	I1212 23:14:28.574988    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.698031    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:28.717179    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.830537    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.348608    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.461037    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.849506    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.957356    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.362625    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.472272    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.848396    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.953849    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.353576    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.462341    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.853090    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.967586    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.355892    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.469924    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.859728    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.962773    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.364239    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.470177    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.864784    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.968916    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.351439    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.459257    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.855142    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.992369    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.364118    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.480745    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.848471    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.981045    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.353504    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:36.474547    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.857811    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.009603    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.360939    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.541831    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.855360    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.978223    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.358089    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:38.550481    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.868761    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.022604    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:39.352440    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.596621    8472 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:14:39.596712    8472 command_runner.go:130] > default   0         0s
	I1212 23:14:39.596736    8472 kubeadm.go:1088] duration metric: took 11.2361966s to wait for elevateKubeSystemPrivileges.
	I1212 23:14:39.596811    8472 kubeadm.go:406] StartCluster complete in 27.450269s
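(Editor's note: the burst of `kubectl get sa default` runs above, each answered with `serviceaccounts "default" not found`, is a poll waiting for the token controller to create the default ServiceAccount; it succeeded after roughly 11.2s. A sketch of an equivalent poll with client-go, assumed v0.28.x, using the in-VM kubeconfig path from the log:)

	package main

	import (
		"context"
		"fmt"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default ServiceAccount is ready")
				return
			}
			if !apierrors.IsNotFound(err) {
				panic(err) // only retry on NotFound, matching the log above
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for default ServiceAccount")
	}
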
	I1212 23:14:39.596862    8472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.597021    8472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.598694    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.600390    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:14:39.600697    8472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:14:39.600890    8472 addons.go:69] Setting storage-provisioner=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:69] Setting default-storageclass=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:231] Setting addon storage-provisioner=true in "multinode-392000"
	I1212 23:14:39.601014    8472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-392000"
	I1212 23:14:39.601153    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:39.601286    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:39.602024    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.602448    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.615520    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.616537    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.618133    8472 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:14:39.618679    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.618746    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.618746    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.618746    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.632969    8472 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1212 23:14:39.632969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Audit-Id: 48d468c3-d2b5-4ebf-8a31-5cfcaaf2e038
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.633400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.633475    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.633615    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634237    8472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634414    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.634442    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:39.634488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.647166    8472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:14:39.647166    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Audit-Id: 1d18df1e-467b-45b4-8fd3-f1be9c0eb077
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.647166    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.647166    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.647166    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.647166    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.650190    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:39.650593    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.650593    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Audit-Id: 257b2ee0-65f9-4fbe-a3e6-2b26b38e4e97
	I1212 23:14:39.650746    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.650746    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.650879    8472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-392000" context rescaled to 1 replicas
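(Editor's note: the GET/PUT pair against `.../deployments/coredns/scale` above uses the Scale subresource to drop CoreDNS from kubeadm's default of 2 replicas to 1 on this single-node cluster. The same exchange via client-go, assumed v0.28.x, would look roughly like:)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()
		deploys := cs.AppsV1().Deployments("kube-system")
		// GET .../deployments/coredns/scale, as in the first request above.
		scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1 // the PUT body above carries "spec":{"replicas":1}
		if _, err := deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}
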
	I1212 23:14:39.650983    8472 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:14:39.652101    8472 out.go:177] * Verifying Kubernetes components...
	I1212 23:14:39.667782    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:39.958848    8472 command_runner.go:130] > apiVersion: v1
	I1212 23:14:39.958848    8472 command_runner.go:130] > data:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   Corefile: |
	I1212 23:14:39.958848    8472 command_runner.go:130] >     .:53 {
	I1212 23:14:39.958848    8472 command_runner.go:130] >         errors
	I1212 23:14:39.958848    8472 command_runner.go:130] >         health {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            lameduck 5s
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         ready
	I1212 23:14:39.958848    8472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            pods insecure
	I1212 23:14:39.958848    8472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:14:39.958848    8472 command_runner.go:130] >            ttl 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         prometheus :9153
	I1212 23:14:39.958848    8472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            max_concurrent 1000
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         cache 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loop
	I1212 23:14:39.958848    8472 command_runner.go:130] >         reload
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loadbalance
	I1212 23:14:39.958848    8472 command_runner.go:130] >     }
	I1212 23:14:39.958848    8472 command_runner.go:130] > kind: ConfigMap
	I1212 23:14:39.958848    8472 command_runner.go:130] > metadata:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:14:26Z"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   name: coredns
	I1212 23:14:39.958848    8472 command_runner.go:130] >   namespace: kube-system
	I1212 23:14:39.958848    8472 command_runner.go:130] >   resourceVersion: "257"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   uid: 7f397c04-a5c3-4364-9f10-d28458f5d6c8
	I1212 23:14:39.959540    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
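(Editor's note: the sed pipeline above splices two fragments into the Corefile printed just before it: a `log` directive ahead of `errors`, and a `hosts` block ahead of `forward` that resolves host.minikube.internal to the Hyper-V host. The ConfigMap fed to `kubectl replace` should therefore contain approximately:)

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       172.30.48.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    ...
	}
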
	I1212 23:14:39.961001    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.962156    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.963642    8472 node_ready.go:35] waiting up to 6m0s for node "multinode-392000" to be "Ready" ...
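(Editor's note: the repeated `GET /api/v1/nodes/multinode-392000` requests that follow are a readiness poll; a node counts as "Ready" when its NodeReady condition is True. A sketch of that loop with client-go, assumed v0.28.x, matching the 6m0s budget stated above:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's NodeReady condition is True.
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-392000", metav1.GetOptions{})
			if err == nil && nodeIsReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("node never became Ready")
	}
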
	I1212 23:14:39.963798    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.963914    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.963987    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.963987    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.969659    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:39.969659    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Audit-Id: ed4f4991-8208-4d64-8919-42fbdb031b1b
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.970862    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:39.972406    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.972406    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.972643    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.972643    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.974394    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:39.975312    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Audit-Id: 8a9ed035-646e-4f38-b110-fe61c0dc496f
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.975401    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.975946    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.488957    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.488957    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.488957    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.488957    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.492969    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:40.492969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Audit-Id: d903c580-8adc-4d96-8f5f-d51f731bc93c
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.492969    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.668167    8472 command_runner.go:130] > configmap/coredns replaced
	I1212 23:14:40.669157    8472 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
	I1212 23:14:40.981876    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.981950    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.982011    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.982011    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.991394    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:40.991394    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Audit-Id: ab5b6285-e3ff-4e6f-b61b-a20df0759ba6
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.991394    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.489914    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.490030    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.490030    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.490030    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.494868    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:41.495917    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Audit-Id: 1e563910-36f9-4968-810e-a0bd4b1bd52f
	I1212 23:14:41.496167    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.496302    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.496696    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.903563    8472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:41.903563    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:41.904285    8472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:41.904285    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:14:41.904285    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.905110    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:41.906532    8472 addons.go:231] Setting addon default-storageclass=true in "multinode-392000"
	I1212 23:14:41.906532    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:41.907304    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.980106    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.980486    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.980486    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.980486    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.985786    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:41.985786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.985786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Audit-Id: 08bb64de-dde1-4fa6-8913-0f6b5de0cf24
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.986033    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.986033    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.986463    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.987219    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:42.486548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.486653    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.486653    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.486653    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.496333    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:42.496447    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.496447    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.496524    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.496524    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Audit-Id: 4ab1601a-d766-4e5d-a976-df70bc7f3fc6
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.496654    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.497705    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:42.979753    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.979865    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.979865    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.979865    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.984301    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:42.984301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Audit-Id: d84e4388-d133-418c-ad44-eb666ea80368
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.984627    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.984771    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.985134    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.487286    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.487436    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.487436    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.487436    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.493059    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.493240    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Audit-Id: ff7197c8-30b8-4b58-8cc1-df9d319b0dbf
	I1212 23:14:43.493700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.979059    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.979132    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.979132    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.979132    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.984231    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.984231    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Audit-Id: a3b2e6ef-d4d8-4f3e-b9c5-6d5c3c21bbd3
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.984345    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.984345    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.984416    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.984416    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.984602    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.095027    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.095183    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.095249    8472 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:44.095249    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
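The ssh_runner "scp memory" line above streams an in-memory manifest (271 bytes here) straight to a path inside the VM rather than copying a local file. A minimal sketch under the assumption of golang.org/x/crypto/ssh (writeRemote is hypothetical; minikube's own runner differs in detail, and the client construction is sketched after the sshutil line further below):

```go
package sketch

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams data to dest on the node over an existing SSH client.
func writeRemote(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee copies stdin to dest; sudo because /etc/kubernetes is root-owned.
	return sess.Run("sudo tee " + dest + " >/dev/null")
}
```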
	I1212 23:14:44.095249    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.120131    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
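The libmachine lines above shell out to PowerShell to read the VM's state and first IP address. A minimal sketch of the same call using the standard library's os/exec, with the exact command string taken from the log:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Non-interactive PowerShell invocation, as in the libmachine log lines.
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		`(( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]`,
	)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("stderr:", stderr.String())
		panic(err)
	}
	fmt.Println("stdout:", stdout.String()) // e.g. 172.30.51.245
}
```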
	I1212 23:14:44.483249    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.483332    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.483332    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.483332    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.487173    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.488191    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.488191    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.488335    8472 round_trippers.go:580]     Audit-Id: 266b4ffc-e86f-4f1b-b463-36bca9136481
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.488839    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.489392    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:44.989331    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.989428    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.989428    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.989428    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.992917    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.993400    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Audit-Id: d75583c4-9a74-49b4-bbf3-b56138886974
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.993757    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.481494    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.481494    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.481494    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.481778    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.487002    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:45.487002    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Audit-Id: 34cccb14-bef0-4d33-bac4-e822ad4bf7d0
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.487387    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.990444    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.990444    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.990444    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.990444    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.994459    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:45.995453    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Audit-Id: 75a4ef11-ddaa-4f93-8672-e7309c071368
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.995553    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.995597    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.996008    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.478860    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.478860    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.478860    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.478860    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.482906    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:46.482906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.482906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.484021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Audit-Id: f2e453d5-50bc-4639-bda1-a5a03905d0ad
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:46.484906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.485283    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.902984    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
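The sshutil "new ssh client" line above dials the node at the IP just read from Hyper-V, as user docker with the profile's id_rsa. A minimal sketch assuming golang.org/x/crypto/ssh (the host key check is skipped here purely for illustration):

```go
package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and connection parameters copied from the log line above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.30.51.245:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
}
```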
	I1212 23:14:46.980436    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.980521    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.980521    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.980521    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.984189    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:46.984189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Audit-Id: 7c159fbf-c0d0-41ed-a33b-761beff59770
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.984333    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.984744    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.985579    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:47.051355    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
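The Run line above executes kubectl inside the VM over SSH; the command_runner lines that follow are its captured output. A minimal sketch of running such a command and collecting combined output, again assuming golang.org/x/crypto/ssh (runRemote is an illustrative name):

```go
package sketch

import "golang.org/x/crypto/ssh"

// runRemote runs cmd on the node over an existing SSH client and returns
// its combined stdout/stderr, like the command_runner lines in the log.
func runRemote(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}
```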
	I1212 23:14:47.484303    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.484303    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.484303    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.484303    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.488895    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:47.488895    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Audit-Id: 28e8c341-cf42-49da-a69a-ab79f001048f
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.489240    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:47.868848    8472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:14:47.868942    8472 command_runner.go:130] > pod/storage-provisioner created
	I1212 23:14:47.990911    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.991083    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.991083    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.991083    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.996324    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:47.996324    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Audit-Id: 898f23b9-63a4-46cb-8539-9e21fae3e688
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.997714    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.480781    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.480862    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.480862    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.480862    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.484374    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.485189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.485189    8472 round_trippers.go:580]     Audit-Id: 1a3b1ec7-5eb6-4bb8-b344-5426a5516c00
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.485621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.989623    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.989623    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.989623    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.989698    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.992877    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.993906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Audit-Id: 975a7df8-210f-4288-bec3-86537d1ea98a
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.993906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.993906    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:49.083047    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:49.083318    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:49.083618    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:14:49.220179    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:49.478362    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.478404    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.478488    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.478488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.486550    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:49.486550    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Audit-Id: 886c4e27-fc97-4d2e-be30-23c8528e1331
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.487579    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:49.633908    8472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:14:49.634368    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:14:49.634438    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.634438    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.634438    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.638301    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.638301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Length: 1273
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Audit-Id: 478d6e3c-e333-45bd-ad37-ff39e2c109a4
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.638613    8472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:14:49.639458    8472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.639570    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:14:49.639570    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:49.639632    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.643499    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.643499    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.643499    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Length: 1220
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Audit-Id: a15a2fa8-ae37-4d33-8ee0-c9808f9a288d
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.644178    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.644178    8472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
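The GET on /apis/storage.k8s.io/v1/storageclasses followed by the PUT above is the default-storageclass addon reading the standard class and writing it back with the storageclass.kubernetes.io/is-default-class annotation set. A minimal sketch of the equivalent client-go update (markDefault is an illustrative name, not minikube's own function):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefault fetches a StorageClass and PUTs it back with the
// is-default-class annotation, matching the GET/PUT pair in the log.
func markDefault(cs *kubernetes.Clientset, name string) error {
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
	return err
}
```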
	I1212 23:14:49.682970    8472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:14:49.684353    8472 addons.go:502] enable addons completed in 10.0836106s: enabled=[storage-provisioner default-storageclass]
	I1212 23:14:49.980729    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.980729    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.980729    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.980729    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.984838    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:49.985229    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Audit-Id: ce24cfdd-3acb-4830-ac23-4db47133d6a3
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.985323    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.985624    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.483312    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.483375    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.483375    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.483375    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.488227    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:50.488227    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Audit-Id: 6991df1a-7c65-4f8c-aa6d-8a4b07664792
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.488335    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.488445    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.981018    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.981153    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.981153    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.981153    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.986420    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:50.987021    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Audit-Id: 05d03ac9-757b-47ae-892d-06c9975e0504
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.987288    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.481784    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.481935    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.481935    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.481935    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.487331    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:51.487741    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Audit-Id: ea8e810d-7571-41b8-a29c-f7b350aa7e48
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.488700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.489229    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:51.980060    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.980060    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.980060    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.980060    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.986763    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:51.987222    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Audit-Id: e66e1130-e80e-4e5c-a2df-c6f097d5374f
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.987303    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.487530    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.487615    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.487615    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.487615    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.491306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.491306    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Audit-Id: 6d39f79a-048a-4380-88c0-1538a97cf6cb
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.492158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.988203    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.988350    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.988350    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.988350    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.991874    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.991874    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Audit-Id: b82dc74d-b44e-41ac-8e64-37803addc6c1
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.991874    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.992376    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.992376    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.992866    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.487128    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.487128    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.487128    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.487128    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.490404    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.490404    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Audit-Id: fcdaf883-7338-4102-abda-846f7169bb26
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.491349    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.491797    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:53.988709    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.988958    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.988958    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.988958    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.992351    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.992351    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Audit-Id: c1836498-4d32-49e6-a01e-d2011a223374
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.992796    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.992872    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.992872    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.993179    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.484052    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.484152    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.484152    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.484152    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.487262    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:54.487786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Audit-Id: f53da0c3-a775-4443-aabf-f7c4222d5d96
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.488171    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.984021    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.984123    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.984123    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.984123    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.989880    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:54.989880    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Audit-Id: c5095c7c-a76c-429e-af60-764abe494287
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.991622    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.485045    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.485181    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.485181    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.485181    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.489762    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:55.489762    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Audit-Id: 4f7c8477-81de-4b39-8164-bf264c826669
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.489762    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.490338    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.490338    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.490621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.987165    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.987255    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.987255    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.987255    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.990960    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:55.991209    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Audit-Id: 730af8dd-1c79-432a-ac28-d735f45d211a
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.991209    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:55.991993    8472 node_ready.go:49] node "multinode-392000" has status "Ready":"True"
	I1212 23:14:55.991993    8472 node_ready.go:38] duration metric: took 16.0282441s waiting for node "multinode-392000" to be "Ready" ...
	I1212 23:14:55.991993    8472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:14:55.992424    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:55.992451    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.992451    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.992451    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.997828    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:55.997828    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Audit-Id: 52d7810c-f76c-4c45-9178-39943c5e611e
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:56.000563    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I1212 23:14:56.005713    8472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:56.005713    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.005713    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.005713    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.005713    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.009293    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:56.009293    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.009293    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Audit-Id: 349c895b-3263-4592-bf5f-cc4fce22f4db
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.009961    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.010548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.010601    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.010601    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.010670    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.013302    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.013302    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Audit-Id: 14638822-3485-4ab6-af72-f2d254050772
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.013994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.014102    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.014102    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.014313    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.014948    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.014948    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.014948    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.014948    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.017876    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.017876    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Audit-Id: e61611d3-94ea-464c-acce-2a665e01fb85
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.018073    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.018159    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.018325    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.018970    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.019023    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.019023    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.019078    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.020855    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:56.020855    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Audit-Id: d723e84b-6004-4853-8f4c-e9de464efdde
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.021772    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.021800    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.021800    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.021800    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.536622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.536622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.536622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.536622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.540896    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:56.540896    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.541442    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Audit-Id: ea416197-cb64-40af-bf73-38fd2e37a823
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.541534    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.541534    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.541670    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.542439    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.542559    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.542559    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.542559    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.544902    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.544902    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Audit-Id: 82379cb0-03c3-4187-8a08-c95f8c2d434e
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.546107    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.027636    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.027717    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.027791    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.027791    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.030425    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.030425    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Audit-Id: 856b15b9-b6fa-489d-9a24-eaaf1afc5bd5
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.031434    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.032501    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.032606    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.032658    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.032658    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.035158    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.035158    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Audit-Id: 2f81449f-83b9-4c66-bc2e-17ac17b48322
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.035158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.534454    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.534587    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.534587    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.534587    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.541021    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:57.541365    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Audit-Id: bb822741-a39c-491c-8b27-f5dc32b9ac7d
	I1212 23:14:57.541943    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.542190    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.542190    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.542190    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.542190    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.545257    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:57.545257    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.545896    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Audit-Id: 27629acd-42f2-4083-aba9-c01ef165283c
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.546084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.546084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.546180    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.546712    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:58.023516    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:58.023822    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.023880    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.023880    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.027764    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.028057    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.028057    8472 round_trippers.go:580]     Audit-Id: 1522c4b2-abdb-44ed-9ac8-0a151cbe371e
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.028173    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.028494    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1212 23:14:58.029540    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.029617    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.029617    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.029617    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.032006    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.033008    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Audit-Id: 5f970653-a2f7-4b0e-ab8b-5146ee17b7e9
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.033046    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.033115    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.033423    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.034124    8472 pod_ready.go:92] pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.034124    8472 pod_ready.go:81] duration metric: took 2.0284013s waiting for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034124    8472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034268    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-392000
	I1212 23:14:58.034268    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.034268    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.034268    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.040664    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:58.040664    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Audit-Id: 8ec23e55-3f6f-45bb-9dd5-58fa0a89221a
	I1212 23:14:58.041172    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-392000","namespace":"kube-system","uid":"9ba15872-d011-4389-bbbd-cda3bb377f30","resourceVersion":"299","creationTimestamp":"2023-12-12T23:14:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.51.245:2379","kubernetes.io/config.hash":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.mirror":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.seen":"2023-12-12T23:14:17.439033677Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1212 23:14:58.041719    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.041719    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.041719    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.041719    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.045328    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.045328    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Audit-Id: 9c560ca1-5f98-49b8-ae36-71e9aa076f5e
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.045328    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.045328    8472 pod_ready.go:92] pod "etcd-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.045328    8472 pod_ready.go:81] duration metric: took 11.2037ms waiting for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-392000
	I1212 23:14:58.046330    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.046330    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.046330    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.048649    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.048649    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Audit-Id: ebed4532-17cb-49da-a702-3de6ff899b2d
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.048649    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-392000","namespace":"kube-system","uid":"4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75","resourceVersion":"330","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.51.245:8443","kubernetes.io/config.hash":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.mirror":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.seen":"2023-12-12T23:14:26.871565960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1212 23:14:58.048649    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.048649    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.048649    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.052979    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.052979    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.052979    8472 round_trippers.go:580]     Audit-Id: ba4e3ef6-8436-406b-be77-63a9e785adac
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.053729    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.053941    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.054233    8472 pod_ready.go:92] pod "kube-apiserver-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.054233    8472 pod_ready.go:81] duration metric: took 8.9055ms waiting for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-392000
	I1212 23:14:58.054233    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.054233    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.054233    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.057795    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.057795    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.057795    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.057795    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Audit-Id: 23c9283e-f0e0-44ab-b1c7-820bcafbc897
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.058055    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.058481    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-392000","namespace":"kube-system","uid":"60a15f93-6e63-4c2e-a54e-7e6a2275127c","resourceVersion":"296","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.mirror":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.seen":"2023-12-12T23:14:26.871570660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1212 23:14:58.059284    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.059347    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.059347    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.059347    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.067599    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:58.067599    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Audit-Id: cd4581bf-1000-4906-812b-59a573920066
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.068544    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.068544    8472 pod_ready.go:92] pod "kube-controller-manager-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.068544    8472 pod_ready.go:81] duration metric: took 14.3106ms waiting for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.068544    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.194675    8472 request.go:629] Waited for 125.8741ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.194825    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.194825    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.198109    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.198109    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Audit-Id: 5a8d39b0-49cf-41c3-9e07-80cfc7e1b033
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.198994    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.199312    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55nr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"76f72515-2132-4473-883e-2846ebaca62e","resourceVersion":"403","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"932f2a4e-5c28-4c7c-8885-1298fbe1d167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932f2a4e-5c28-4c7c-8885-1298fbe1d167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I1212 23:14:58.398673    8472 request.go:629] Waited for 198.4474ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.398787    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.398966    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.401717    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.401717    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.401717    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Audit-Id: b728eb3e-d54c-43cb-90ce-e7b356f69ae4
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.402725    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.402828    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.403583    8472 pod_ready.go:92] pod "kube-proxy-55nr8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.403583    8472 pod_ready.go:81] duration metric: took 335.0375ms waiting for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.403583    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.601380    8472 request.go:629] Waited for 197.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.601681    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.601681    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.605957    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.606145    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Audit-Id: 02f9b40f-c4e0-4c98-bcbc-9913ccb796e7
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.606409    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-392000","namespace":"kube-system","uid":"1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2","resourceVersion":"295","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.mirror":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.seen":"2023-12-12T23:14:26.871571660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1212 23:14:58.789396    8472 request.go:629] Waited for 182.2618ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789688    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789779    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.789779    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.789828    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.793340    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.794060    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Audit-Id: e123c53f-d439-4d57-931f-9f875d26f581
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.794126    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.795030    8472 pod_ready.go:92] pod "kube-scheduler-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.795030    8472 pod_ready.go:81] duration metric: took 391.4452ms waiting for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.795030    8472 pod_ready.go:38] duration metric: took 2.8027177s waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
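
The waits above poll each control-plane pod's Ready condition through the apiserver REST API until it reports True, with a 6m0s per-pod timeout. A minimal client-go sketch of that loop (not minikube's pod_ready.go; kubeconfig path is the default, pod name taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the pod until its Ready condition is True, mirroring
// the 6m0s per-pod timeout seen in the log. Sketch only, not minikube's code.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-multinode-392000"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
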
	I1212 23:14:58.795030    8472 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:14:58.810986    8472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:14:58.830637    8472 command_runner.go:130] > 2099
	I1212 23:14:58.830637    8472 api_server.go:72] duration metric: took 19.1794438s to wait for apiserver process to appear ...
	I1212 23:14:58.830637    8472 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:14:58.830637    8472 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:14:58.838776    8472 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
	I1212 23:14:58.839718    8472 round_trippers.go:463] GET https://172.30.51.245:8443/version
	I1212 23:14:58.839718    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.839718    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.839718    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.841290    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:58.841290    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.841290    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Length: 264
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.841836    8472 round_trippers.go:580]     Audit-Id: 46b8d46d-380f-4f82-941f-34d5ff7fc981
	I1212 23:14:58.841875    8472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:14:58.841973    8472 api_server.go:141] control plane version: v1.28.4
	I1212 23:14:58.842105    8472 api_server.go:131] duration metric: took 11.468ms to wait for apiserver health ...
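
The health gate above is two plain HTTPS GETs: /healthz must return 200 with body "ok", then /version is fetched and parsed for the control-plane version. A bare net/http sketch of the same probes (TLS verification is skipped here for brevity; the real client authenticates with the cluster's certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Insecure transport for illustration only; address taken from the log.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://172.30.51.245:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
	}
}
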
	I1212 23:14:58.842105    8472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:14:58.990794    8472 request.go:629] Waited for 148.3275ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990949    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990993    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.990993    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.990993    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.996780    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:58.996780    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.997050    8472 round_trippers.go:580]     Audit-Id: ef9a1c82-2d0d-4fd5-aef9-3720896905c4
	I1212 23:14:58.998795    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.002276    8472 system_pods.go:59] 8 kube-system pods found
	I1212 23:14:59.002323    8472 system_pods.go:61] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.002414    8472 system_pods.go:74] duration metric: took 160.3082ms to wait for pod list to return data ...
	I1212 23:14:59.002414    8472 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:14:59.195077    8472 request.go:629] Waited for 192.5258ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.195622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.195622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.199306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.199787    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Audit-Id: d11e054d-44f1-4ba9-98c1-9a69160ffdff
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Length: 261
	I1212 23:14:59.199787    8472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7c305be4-9460-4ff1-a283-85a13dcb1cde","resourceVersion":"367","creationTimestamp":"2023-12-12T23:14:39Z"}}]}
	I1212 23:14:59.199787    8472 default_sa.go:45] found service account: "default"
	I1212 23:14:59.199787    8472 default_sa.go:55] duration metric: took 197.3719ms for default service account to be created ...
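
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter on the client, not from server-side APF. A small sketch of that mechanism (the QPS and burst values here are illustrative, not read from the log):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket: sustained 5 requests/s with a burst of 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	start := time.Now()
	for i := 0; i < 15; i++ {
		_ = limiter.Wait(context.Background()) // blocks until a token is free
	}
	fmt.Printf("15 requests took %v with client-side throttling\n", time.Since(start))
}

Each Wait call blocks until the bucket grants a token; that blocked duration is what request.go:629 reports above.
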
	I1212 23:14:59.199787    8472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:14:59.396801    8472 request.go:629] Waited for 196.4246ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.397321    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.397321    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.400691    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.400691    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Audit-Id: 70f11694-1074-4f5f-b23d-4a24efbaa455
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.403399    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.408656    8472 system_pods.go:86] 8 kube-system pods found
	I1212 23:14:59.409213    8472 system_pods.go:89] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.409293    8472 system_pods.go:126] duration metric: took 209.505ms to wait for k8s-apps to be running ...
	I1212 23:14:59.409358    8472 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:14:59.423142    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:59.445203    8472 system_svc.go:56] duration metric: took 35.9106ms WaitForService to wait for kubelet.
	I1212 23:14:59.445871    8472 kubeadm.go:581] duration metric: took 19.7946755s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:14:59.445871    8472 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:14:59.598916    8472 request.go:629] Waited for 152.7318ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.599012    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.599012    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.605849    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:59.605849    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Audit-Id: 36bbb4b8-2cd2-4825-9a0a-f9d3f7de5388
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.605849    8472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1212 23:14:59.606649    8472 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:14:59.606649    8472 node_conditions.go:123] node cpu capacity is 2
	I1212 23:14:59.606649    8472 node_conditions.go:105] duration metric: took 160.7768ms to run NodePressure ...
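
The NodePressure check reads the capacity fields out of the NodeList payload above (ephemeral-storage 17784752Ki, cpu 2). A client-go sketch of the same read, assuming a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity quantities as reported in the node status above.
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}
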
	I1212 23:14:59.606649    8472 start.go:228] waiting for startup goroutines ...
	I1212 23:14:59.606649    8472 start.go:233] waiting for cluster config update ...
	I1212 23:14:59.606649    8472 start.go:242] writing updated cluster config ...
	I1212 23:14:59.609246    8472 out.go:177] 
	I1212 23:14:59.621487    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:59.622710    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.625530    8472 out.go:177] * Starting worker node multinode-392000-m02 in cluster multinode-392000
	I1212 23:14:59.626570    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:14:59.626570    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:14:59.627622    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:14:59.627622    8472 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I1212 23:14:59.627622    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.635421    8472 start.go:365] acquiring machines lock for multinode-392000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:14:59.636404    8472 start.go:369] acquired machines lock for "multinode-392000-m02" in 983.5µs
	I1212 23:14:59.636641    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1212 23:14:59.636641    8472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1212 23:14:59.637295    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:14:59.637925    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:14:59.637925    8472 client.go:168] LocalClient.Create starting
	I1212 23:14:59.637925    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:14:59.638507    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.638593    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.638845    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:14:59.639076    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.639124    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.639207    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:15:01.516858    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:04.771547    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:04.771630    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:04.771709    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:08.419999    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:08.420189    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:08.422680    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:15:08.872411    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: Creating VM...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:12.102765    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:12.102977    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:12.103063    8472 main.go:141] libmachine: Using switch "Default Switch"
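
Switch selection above parses the ConvertTo-Json output of Hyper-V\Get-VMSwitch, preferring an External switch and otherwise falling back to the Default Switch by its well-known ID. A sketch of decoding that JSON in Go, with the struct shape inferred from the fields selected in the PowerShell pipeline:

package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the Id/Name/SwitchType fields selected above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enum: 0 Private, 1 Internal, 2 External
}

func main() {
	// Sample payload copied from the log output above.
	raw := []byte(`[
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]`)
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("candidate switch %q (type %d)\n", s.Name, s.SwitchType)
	}
}
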
	I1212 23:15:12.103063    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:13.864474    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:13.864777    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:13.864985    8472 main.go:141] libmachine: Creating VHD
	I1212 23:15:13.864985    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C3CD4AE2-4C48-4AEE-B99B-DEEF0B4769F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:17.628988    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:15:17.629139    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:15:17.638018    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:20.769313    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -SizeBytes 20000MB
	I1212 23:15:23.326059    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:23.326281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:23.326443    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-392000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000-m02 -DynamicMemoryEnabled $false
	I1212 23:15:29.047581    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:29.047983    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:29.048174    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000-m02 -Count 2
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:31.217251    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\boot2docker.iso'
	I1212 23:15:33.748082    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd'
	I1212 23:15:36.359294    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: Starting VM...
	I1212 23:15:36.359738    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000-m02
	I1212 23:15:39.227776    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: [stderr =====>] : 
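
Every Hyper-V step above is a powershell.exe -NoProfile -NonInteractive invocation with stdout/stderr captured: create a small fixed VHD, write the SSH-key tarball into it, convert it to a dynamic disk, grow it, then define and start the VM. A condensed sketch of that shell-out pattern (VM name and paths are illustrative; the tar-header step is omitted):

package main

import (
	"fmt"
	"os/exec"
)

// ps runs one PowerShell command the way the log above does.
func ps(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	steps := []string{
		`Hyper-V\New-VHD -Path 'C:\vm\fixed.vhd' -SizeBytes 10MB -Fixed`,
		`Hyper-V\Convert-VHD -Path 'C:\vm\fixed.vhd' -DestinationPath 'C:\vm\disk.vhd' -VHDType Dynamic -DeleteSource`,
		`Hyper-V\Resize-VHD -Path 'C:\vm\disk.vhd' -SizeBytes 20000MB`,
		`Hyper-V\New-VM demo-vm -Path 'C:\vm' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
		`Hyper-V\Set-VMMemory -VMName demo-vm -DynamicMemoryEnabled $false`,
		`Hyper-V\Set-VMProcessor demo-vm -Count 2`,
		`Hyper-V\Add-VMHardDiskDrive -VMName demo-vm -Path 'C:\vm\disk.vhd'`,
		`Hyper-V\Start-VM demo-vm`,
	}
	for _, s := range steps {
		if out, err := ps(s); err != nil {
			fmt.Printf("step failed: %v\n%s\n", err, out)
			return
		}
	}
}
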
	I1212 23:15:39.227906    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:15:39.228071    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:41.509631    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:44.031565    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:44.031787    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:45.038541    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:49.774015    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:49.774142    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:50.775721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:55.502870    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:55.503039    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:56.518873    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:58.738659    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:58.738736    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:58.738844    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:02.269014    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:04.506810    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:04.506866    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:04.506903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:07.087487    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:07.087855    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:07.088033    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stderr =====>] : 
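
The "Waiting for host to start..." loop above polls the VM state and the first adapter's first IP address roughly once a second until an address appears. A self-contained sketch of that loop (VM name illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func ps(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	for {
		// Empty output means the guest has no address yet; keep polling.
		ip, _ := ps(`(( Hyper-V\Get-VM demo-vm ).networkadapters[0]).ipaddresses[0]`)
		if ip = strings.TrimSpace(ip); ip != "" {
			fmt.Println("host up at", ip)
			break
		}
		time.Sleep(time.Second)
	}
}
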
	I1212 23:16:09.244063    8472 machine.go:88] provisioning docker machine ...
	I1212 23:16:09.244248    8472 buildroot.go:166] provisioning hostname "multinode-392000-m02"
	I1212 23:16:09.244333    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:11.421631    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:13.977447    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:13.977572    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:13.983166    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:13.992249    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:13.992249    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname
	I1212 23:16:14.163299    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000-m02
	
	I1212 23:16:14.163350    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:16.307595    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:18.839723    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:18.840482    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:18.840482    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:18.989326    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
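
Provisioning then runs over SSH: set the hostname, write /etc/hostname, and patch /etc/hosts, as the commands above show. A golang.org/x/crypto/ssh sketch of the first of those commands (address and user "docker" taken from the log; the key path is illustrative, and host-key checking is skipped as one might for a throwaway test VM):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa") // illustrative path to the machine's SSH key
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	client, err := ssh.Dial("tcp", "172.30.56.38:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v output=%s\n", err, out)
}
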
	I1212 23:16:18.990311    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:16:18.990311    8472 buildroot.go:174] setting up certificates
	I1212 23:16:18.990311    8472 provision.go:83] configureAuth start
	I1212 23:16:18.990453    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:21.069665    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:23.556570    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:28.222549    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:28.222832    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:28.222832    8472 provision.go:138] copyHostCerts
	I1212 23:16:28.223026    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:16:28.223356    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:16:28.223356    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:16:28.223923    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:16:28.224665    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:16:28.225195    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:16:28.225367    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:16:28.225569    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:16:28.226891    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:16:28.227287    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:16:28.227287    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:16:28.227775    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:16:28.228810    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000-m02 san=[172.30.56.38 172.30.56.38 localhost 127.0.0.1 minikube multinode-392000-m02]
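	configureAuth signs a per-node server certificate with the local minikube CA, embedding the node IP and names as SANs so the Docker TLS endpoint verifies under any of them. A rough openssl equivalent of that step, for orientation only (minikube performs this in Go, not with openssl; the org, paths, and SAN list are taken from the log line above):
	
	  # Hypothetical openssl rendition of the server-cert generation:
	  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.multinode-392000-m02"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:172.30.56.38,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-392000-m02')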
	I1212 23:16:28.608171    8472 provision.go:172] copyRemoteCerts
	I1212 23:16:28.622324    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:28.622324    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:30.750561    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:33.272878    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:33.273157    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:33.273672    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:33.380622    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7582767s)
	I1212 23:16:33.380733    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:16:33.380808    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:16:33.420401    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:16:33.420965    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:16:33.458601    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:16:33.458774    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:16:33.496244    8472 provision.go:86] duration metric: configureAuth took 14.5058679s
	I1212 23:16:33.496324    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:33.496868    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:16:33.497008    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:38.152182    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:38.152702    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:38.152702    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:16:38.292294    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:16:38.292294    8472 buildroot.go:70] root file system type: tmpfs
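	The guest reports tmpfs as its root filesystem: the buildroot ISO runs from memory, so files under / such as the unit rendered below do not persist across reboots, which is why the provisioner re-renders the Docker unit on every start. The check itself is a one-liner:
	
	  df --output=fstype / | tail -n 1   # prints "tmpfs" on this guest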
	I1212 23:16:38.292555    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:16:38.292555    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:40.464946    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:43.007365    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:43.008294    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:43.008294    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.51.245"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:16:43.171083    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.51.245
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:16:43.171185    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:45.284624    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:47.800669    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:47.801716    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:47.801716    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:16:48.748338    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
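	The diff-or-install one-liner above is the provisioner's idempotence check: diff exits non-zero both when the rendered unit differs and, as here, when /lib/systemd/system/docker.service does not exist yet ("can't stat"), so a fresh node always takes the install-and-restart branch:
	
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	    || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	         sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }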
	
	I1212 23:16:48.748338    8472 machine.go:91] provisioned docker machine in 39.5040974s
	I1212 23:16:48.748338    8472 client.go:171] LocalClient.Create took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:300] post-start starting for "multinode-392000-m02" (driver="hyperv")
	I1212 23:16:48.748887    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:48.762204    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:48.762204    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:50.863756    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:53.416608    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:53.526358    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7640815s)
	I1212 23:16:53.541029    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:53.550919    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:16:53.550919    8472 command_runner.go:130] > ID=buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:16:53.550919    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:16:53.551099    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:16:53.552635    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:16:53.552635    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:16:53.567223    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:53.582208    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:16:53.623271    8472 start.go:303] post-start completed in 4.8749111s
	I1212 23:16:53.626212    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:55.698604    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:58.239486    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:16:58.242308    8472 start.go:128] duration metric: createHost completed in 1m58.6051335s
	I1212 23:16:58.242308    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:00.321547    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:02.864207    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:02.864907    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:02.864907    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:03.006436    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423023.005320607
	
	I1212 23:17:03.006436    8472 fix.go:206] guest clock: 1702423023.005320607
	I1212 23:17:03.006436    8472 fix.go:219] Guest: 2023-12-12 23:17:03.005320607 +0000 UTC Remote: 2023-12-12 23:16:58.2423084 +0000 UTC m=+328.348317501 (delta=4.763012207s)
	I1212 23:17:03.006606    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:05.102311    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:07.631708    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:07.632284    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:07.632480    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702423023
	I1212 23:17:07.785418    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:17:03 UTC 2023
	
	I1212 23:17:07.785481    8472 fix.go:226] clock set: Tue Dec 12 23:17:03 UTC 2023
	 (err=<nil>)
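	The three commands above implement the guest clock check: read the guest's epoch with date +%s.%N, compare it against the host-side timestamp captured when createHost finished (the 4.763 s delta here is mostly the time spent in the Hyper-V polls in between, not real drift), and, since the delta exceeds the tolerance, re-seat the guest clock with date -s. A sketch of the guest-side half (minikube drives it over its native SSH client):
	
	  guest=$(date +%s.%N)      # 1702423023.005320607 in the run above
	  sudo date -s @1702423023  # pin the clock to a whole-second epoch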
	I1212 23:17:07.785481    8472 start.go:83] releasing machines lock for "multinode-392000-m02", held for 2m8.1482636s
	I1212 23:17:07.785678    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:09.909750    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:12.452194    8472 out.go:177] * Found network options:
	I1212 23:17:12.452963    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.453612    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.454421    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.455285    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:17:12.456641    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.458904    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:12.459089    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:12.471636    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:17:12.471636    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:14.665006    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.330171    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.349676    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.349791    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.350393    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.520588    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:17:17.520698    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0616953s)
	I1212 23:17:17.520789    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1212 23:17:17.520789    8472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0491302s)
	W1212 23:17:17.520789    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:17.540506    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:17.565496    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:17:17.565496    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:17.565629    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:17.565729    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:17.592642    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
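	At this point crictl on the guest is pointed at containerd's socket; a few lines further down, once Docker is settled on as the runtime, the same file is rewritten to target cri-dockerd instead. The write amounts to:
	
	  # Render /etc/crictl.yaml (endpoint copied from the log; rewritten to
	  # unix:///var/run/cri-dockerd.sock later in this sequence):
	  printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml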
	I1212 23:17:17.606915    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:17:17.641476    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:17:17.660823    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:17:17.677875    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:17:17.711806    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.740097    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:17:17.771613    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.803488    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:17.833971    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
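	The sed pipeline above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.9, OOM-score restriction is turned off, SystemdCgroup = false selects the cgroupfs driver, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. Sketched as the keys it leaves behind (section paths assumed from containerd's stock CRI config layout, not shown in the log):
	
	  # [plugins."io.containerd.grpc.v1.cri"]
	  #   sandbox_image = "registry.k8s.io/pause:3.9"
	  #   restrict_oom_score_adj = false
	  #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  #     SystemdCgroup = false
	  #   [plugins."io.containerd.grpc.v1.cri".cni]
	  #     conf_dir = "/etc/cni/net.d"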
	I1212 23:17:17.864431    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:17.880090    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:17:17.891942    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:17.921922    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:18.092747    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 23:17:18.119496    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:18.134351    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Unit]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:17:18.152056    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:17:18.152056    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:17:18.152056    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Service]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Type=notify
	I1212 23:17:18.152056    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:17:18.152056    8472 command_runner.go:130] > Environment=NO_PROXY=172.30.51.245
	I1212 23:17:18.152056    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:17:18.152056    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:17:18.152056    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:17:18.152056    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:17:18.152056    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:17:18.152056    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:17:18.152056    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:17:18.152056    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:17:18.153073    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:17:18.153073    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:17:18.153073    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:17:18.153073    8472 command_runner.go:130] > Delegate=yes
	I1212 23:17:18.153073    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:17:18.153073    8472 command_runner.go:130] > KillMode=process
	I1212 23:17:18.153073    8472 command_runner.go:130] > [Install]
	I1212 23:17:18.153073    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:17:18.165057    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.196057    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:18.246410    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.280066    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.313237    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:17:18.368580    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.388251    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:18.419806    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:17:18.434054    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:17:18.440054    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:17:18.453333    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:17:18.468540    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:17:18.509927    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:17:18.683814    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:17:18.837593    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:17:18.838769    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:17:18.883547    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:19.063745    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:18:20.172717    8472 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1212 23:18:20.172717    8472 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1212 23:18:20.172717    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1086969s)
	I1212 23:18:20.190447    8472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 23:18:20.208531    8472 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	I1212 23:18:20.208875    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	I1212 23:18:20.208924    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	I1212 23:18:20.208955    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209960    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210035    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1212 23:18:20.210087    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210221    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210255    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210295    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210368    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I1212 23:18:20.218077    8472 out.go:177] 
	W1212 23:18:20.218999    8472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1212 23:18:20.219707    8472 out.go:239] * 
	W1212 23:18:20.220544    8472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:18:20.221540    8472 out.go:177] 
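
	Reading the journal above: after minikube restarted Docker, the new dockerd (pid 1010) spent its whole startup window trying to dial /run/containerd/containerd.sock and gave up with "context deadline exceeded", so systemd marked docker.service failed. As a rough sketch only (not part of the test harness), a minimal Go probe of the same kind separates "socket file missing" from "nothing answering on the socket"; the path comes from the log, while the 5-second timeout is an arbitrary assumption:

	// Minimal sketch, assuming it runs inside the guest VM.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/run/containerd/containerd.sock" // path from the dockerd error above
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket not present:", err) // containerd never created it
			return
		}
		// 5s is an assumed bound; dockerd's own dial timed out after ~60s in the log.
		conn, err := net.DialTimeout("unix", sock, 5*time.Second)
		if err != nil {
			fmt.Println("dial failed (containerd not answering):", err)
			return
		}
		conn.Close()
		fmt.Println("containerd socket accepted a connection")
	}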
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:36:09 UTC. --
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282437620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.284918206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.285109705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286113599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286332798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7694fc2e072409c82e9a89c81cdb1dbf3955a826194d4c6ce69896a818ffd8c/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eec0e2bb8f7fb3f97224e573a86f1d0c8af411baddfa1adaa20402928c80977d/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.073894364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074049263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074069063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074078763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132115055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132325154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132351354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132362153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.818830729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820198629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820221327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820295222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:57 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef8f16e239bc98e7eb9dc0c53fd98c42346ab8c95f8981cda5dde4865c3765b9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 12 23:18:58 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524301867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524431958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524458956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524471055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c0d1460fe14b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   ef8f16e239bc9       busybox-5bc68d56bd-x7ldl
	d33bb583a4c67       ead0a4a53df89                                                                                         21 minutes ago      Running             coredns                   0                   eec0e2bb8f7fb       coredns-5dd5756b68-4xn8h
	f6b34e581fc6d       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   d7694fc2e0724       storage-provisioner
	58046948f7a39       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              21 minutes ago      Running             kindnet-cni               0                   13c6e0fbb4c87       kindnet-bpcxd
	a260d7090f938       83f6cc407eed8                                                                                         21 minutes ago      Running             kube-proxy                0                   60c6b551ada48       kube-proxy-55nr8
	2313251d444bd       e3db313c6dbc0                                                                                         21 minutes ago      Running             kube-scheduler            0                   2f8be6d8ad0b8       kube-scheduler-multinode-392000
	22eab41fa9507       73deb9a3f7025                                                                                         21 minutes ago      Running             etcd                      0                   bb073669c83d7       etcd-multinode-392000
	235957741d342       d058aa5ab969c                                                                                         21 minutes ago      Running             kube-controller-manager   0                   0a157140134cc       kube-controller-manager-multinode-392000
	6c354edfe4229       7fe0e6f37db33                                                                                         21 minutes ago      Running             kube-apiserver            0                   74927bb72940a       kube-apiserver-multinode-392000
	
	* 
	* ==> coredns [d33bb583a4c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52436 - 14801 "HINFO IN 6583598644721938310.5334892932610769491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082658561s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412009s
	[INFO] 10.244.0.3:57910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.064058426s
	[INFO] 10.244.0.3:37802 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.037057868s
	[INFO] 10.244.0.3:53205 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.098326683s
	[INFO] 10.244.0.3:48065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120602s
	[INFO] 10.244.0.3:58616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050508538s
	[INFO] 10.244.0.3:60247 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114602s
	[INFO] 10.244.0.3:38852 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191504s
	[INFO] 10.244.0.3:34962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01262466s
	[INFO] 10.244.0.3:40837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094102s
	[INFO] 10.244.0.3:50511 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000205404s
	[INFO] 10.244.0.3:46775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218404s
	[INFO] 10.244.0.3:51546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092302s
	[INFO] 10.244.0.3:51278 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170504s
	[INFO] 10.244.0.3:40156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096702s
	[INFO] 10.244.0.3:57387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190803s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170703s
	[INFO] 10.244.0.3:48895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108502s
	[INFO] 10.244.0.3:34622 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141402s
	[INFO] 10.244.0.3:36375 - 5 "PTR IN 1.48.30.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000268705s
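
	The query sequence above shows the pod resolver's search-path expansion: with the ndots:5 resolv.conf written by cri-dockerd earlier in this log, "kubernetes.default" comes back NXDOMAIN both bare and with the wrong suffix appended, and only "kubernetes.default.svc.cluster.local" resolves NOERROR. A minimal Go sketch of the same pair of lookups, assuming it runs inside a pod in the default namespace:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		for _, name := range []string{
			"kubernetes.default",                   // only resolves via the search list
			"kubernetes.default.svc.cluster.local", // fully qualified
		} {
			addrs, err := net.LookupHost(name)
			fmt.Printf("%-40s addrs=%v err=%v\n", name, addrs, err)
		}
	}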
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-392000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:36:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.51.245
	  Hostname:    multinode-392000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 430cf12d1f18486bbb2dad5ba35f34f7
	  System UUID:                7ad4f3ea-4ba4-0c41-b258-b71782793bdf
	  Boot ID:                    de054c31-4928-4877-9a0d-94e8f25eb559
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x7ldl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5dd5756b68-4xn8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-multinode-392000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-bpcxd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-multinode-392000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-multinode-392000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-55nr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-multinode-392000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node multinode-392000 event: Registered Node multinode-392000 in Controller
	  Normal  NodeReady                21m                kubelet          Node multinode-392000 status is now: NodeReady
	
	
	Name:               multinode-392000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T23_34_53_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:34:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:36:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:34:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:34:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:34:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:35:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.48.192
	  Hostname:    multinode-392000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 d64f283fdbd04ec2abf7a123575a634e
	  System UUID:                93e58034-5f25-104c-8ce8-7830c4ca3032
	  Boot ID:                    c6343bf3-5b49-4ca9-a1db-9a4a9b9458e8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gl8th       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      78s
	  kube-system                 kube-proxy-rmg5p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 67s                kube-proxy       
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x2 over 78s)  kubelet          Node multinode-392000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x2 over 78s)  kubelet          Node multinode-392000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x2 over 78s)  kubelet          Node multinode-392000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           77s                node-controller  Node multinode-392000-m03 event: Registered Node multinode-392000-m03 in Controller
	  Normal  NodeReady                58s                kubelet          Node multinode-392000-m03 status is now: NodeReady
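
	The request/limit percentages in the two node tables follow directly from each node's Capacity block: 850m requested of 2 CPUs is 42%, and 220Mi requested of 2165980Ki memory is about 10%. A quick arithmetic check in Go, with the figures copied from the multinode-392000 output above:

	package main

	import "fmt"

	func main() {
		// Values from the Capacity and Allocated resources sections above.
		cpuReqMilli, cpuCapMilli := 850, 2000 // 2 CPUs = 2000m
		memReqKi, memCapKi := 220*1024, 2165980

		fmt.Printf("cpu:    %d%%\n", cpuReqMilli*100/cpuCapMilli) // prints 42%
		fmt.Printf("memory: %d%%\n", memReqKi*100/memCapKi)       // prints 10%
	}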
	
	* 
	* ==> dmesg <==
	* [  +1.254662] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.084744] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.170112] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.825297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:13] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136611] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[ +29.496244] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.608816] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.164324] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.190534] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +1.324953] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.324912] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	[  +0.169479] systemd-fstab-generator[1166]: Ignoring "noauto" for root device
	[  +0.169520] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.165018] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.210508] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1309]: Ignoring "noauto" for root device
	[  +2.134792] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.270408] systemd-fstab-generator[1690]: Ignoring "noauto" for root device
	[  +0.838733] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.996306] systemd-fstab-generator[2661]: Ignoring "noauto" for root device
	[ +24.543609] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [22eab41fa950] <==
	* {"level":"info","ts":"2023-12-12T23:14:20.357792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgPreVoteResp from 93ff368cdeea47a1 at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgVoteResp from 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 93ff368cdeea47a1 elected leader 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.361772Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.36777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"93ff368cdeea47a1","local-member-attributes":"{Name:multinode-392000 ClientURLs:[https://172.30.51.245:2379]}","request-path":"/0/members/93ff368cdeea47a1/attributes","cluster-id":"577d8ccb6648d9a8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:20.367821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.367989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.370538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.372122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.51.245:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.409981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"577d8ccb6648d9a8","local-member-id":"93ff368cdeea47a1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410139Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:20.410799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:24:20.417791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2023-12-12T23:24:20.419362Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":681,"took":"1.040537ms","hash":778906542}
	{"level":"info","ts":"2023-12-12T23:24:20.419458Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":778906542,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:29:20.427361Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2023-12-12T23:29:20.428786Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":922,"took":"784.101µs","hash":2156113925}
	{"level":"info","ts":"2023-12-12T23:29:20.428884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2156113925,"revision":922,"compact-revision":681}
	{"level":"info","ts":"2023-12-12T23:34:20.436518Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1163}
	{"level":"info","ts":"2023-12-12T23:34:20.438268Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1163,"took":"858.507µs","hash":3676843287}
	{"level":"info","ts":"2023-12-12T23:34:20.438371Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3676843287,"revision":1163,"compact-revision":922}
	
	* 
	* ==> kernel <==
	*  23:36:10 up 23 min,  0 users,  load average: 0.28, 0.44, 0.42
	Linux multinode-392000 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [58046948f7a3] <==
	* I1212 23:35:02.300673       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:35:12.312152       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:35:12.312302       1 main.go:227] handling current node
	I1212 23:35:12.312316       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:35:12.312325       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:35:22.325567       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:35:22.325645       1 main.go:227] handling current node
	I1212 23:35:22.325659       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:35:22.325667       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:35:32.332399       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:35:32.332486       1 main.go:227] handling current node
	I1212 23:35:32.332502       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:35:32.332510       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:35:42.348805       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:35:42.348886       1 main.go:227] handling current node
	I1212 23:35:42.348899       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:35:42.348907       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:35:52.364433       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:35:52.364463       1 main.go:227] handling current node
	I1212 23:35:52.364476       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:35:52.364482       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:36:02.379396       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:36:02.379496       1 main.go:227] handling current node
	I1212 23:36:02.379528       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:36:02.379536       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
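
	The kindnet log ticks every ten seconds: each pass walks the node list, logs the local node as "handling current node", and records each remote node's PodCIDR. A hypothetical sketch of that loop shape (simplified types and nodes hard-coded from the log; not kindnet's actual implementation, which also programs routes for those CIDRs):

	package main

	import (
		"fmt"
		"time"
	)

	type node struct{ name, ip, cidr string }

	func main() {
		local := "multinode-392000" // assumption: running on the control-plane node
		nodes := []node{
			{"multinode-392000", "172.30.51.245", "10.244.0.0/24"},
			{"multinode-392000-m03", "172.30.48.192", "10.244.1.0/24"},
		}
		for range time.Tick(10 * time.Second) { // matches the 10s cadence in the log
			for _, n := range nodes {
				fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
				if n.name == local {
					fmt.Println("handling current node")
					continue
				}
				fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.cidr)
			}
		}
	}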
	
	* 
	* ==> kube-apiserver [6c354edfe422] <==
	* I1212 23:14:22.966861       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:22.967846       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:14:22.980339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:23.000634       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:14:23.000942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:23.002240       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:23.002278       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:23.002287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:23.002295       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:23.011378       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:23.760921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:14:23.770137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:14:23.770155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:24.576880       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:24.669218       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:14:24.814943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:14:24.825391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.51.245]
	I1212 23:14:24.827160       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:14:24.832899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:14:24.873569       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:26.688119       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:26.703417       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:14:26.718299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:38.752415       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 23:14:39.103035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [235957741d34] <==
	* I1212 23:14:39.734721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.862413ms"
	I1212 23:14:39.785084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.307746ms"
	I1212 23:14:39.785221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.699µs"
	I1212 23:14:55.812545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.499µs"
	I1212 23:14:55.831423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.3µs"
	I1212 23:14:57.948826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.3µs"
	I1212 23:14:57.994852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.967283ms"
	I1212 23:14:57.995045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.9µs"
	I1212 23:14:58.351328       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 23:18:56.342092       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 23:18:56.360783       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-x7ldl"
	I1212 23:18:56.372461       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4rg9t"
	I1212 23:18:56.394927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.064871ms"
	I1212 23:18:56.421496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.459964ms"
	I1212 23:18:56.445750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.867827ms"
	I1212 23:18:56.446077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="103.493µs"
	I1212 23:18:59.452572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.321812ms"
	I1212 23:18:59.452821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.694µs"
	I1212 23:34:52.106307       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-392000-m03\" does not exist"
	I1212 23:34:52.120727       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-392000-m03" podCIDRs=["10.244.1.0/24"]
	I1212 23:34:52.134312       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rmg5p"
	I1212 23:34:52.139634       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gl8th"
	I1212 23:34:53.581868       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-392000-m03"
	I1212 23:34:53.582294       1 event.go:307] "Event occurred" object="multinode-392000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-392000-m03 event: Registered Node multinode-392000-m03 in Controller"
	I1212 23:35:12.788142       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-392000-m03"
	
	* 
	* ==> kube-proxy [a260d7090f93] <==
	* I1212 23:14:40.548388       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:40.568436       1 node.go:141] Successfully retrieved node IP: 172.30.51.245
	I1212 23:14:40.635432       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:40.635716       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:40.638923       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:40.639152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:40.639551       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:40.640017       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:40.641081       1 config.go:188] "Starting service config controller"
	I1212 23:14:40.641288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:40.641685       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:40.641937       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:40.644879       1 config.go:315] "Starting node config controller"
	I1212 23:14:40.645073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:40.742503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:14:40.742567       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:40.745261       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2313251d444b] <==
	* W1212 23:14:22.973548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:22.973806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.868650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:14:23.868677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:14:23.880821       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:14:23.880850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:23.906825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:14:23.907043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:14:23.908460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:23.909050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.954797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:23.954886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:23.961825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:14:23.961846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:14:24.085183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:14:24.085212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:14:24.103672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:14:24.103696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:14:24.119305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:24.119483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:24.143381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:14:24.143650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:14:24.300755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:14:24.300991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:14:25.823950       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:36:10 UTC. --
	Dec 12 23:29:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:30:27 multinode-392000 kubelet[2682]: E1212 23:30:27.005887    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:30:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:31:27 multinode-392000 kubelet[2682]: E1212 23:31:27.017227    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:31:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:31:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:31:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:32:27 multinode-392000 kubelet[2682]: E1212 23:32:27.001857    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:32:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:32:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:32:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:33:27 multinode-392000 kubelet[2682]: E1212 23:33:27.003252    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:34:27 multinode-392000 kubelet[2682]: E1212 23:34:27.005543    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:35:27 multinode-392000 kubelet[2682]: E1212 23:35:27.004961    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W1212 23:36:02.195067    1076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000
E1212 23:36:22.631414   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000: (11.9703201s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-392000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-4rg9t
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/AddNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t
helpers_test.go:282: (dbg) kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t:

-- stdout --
	Name:             busybox-5bc68d56bd-4rg9t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrqjf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hrqjf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m27s (x4 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (250.61s)
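The Pending pod described above was surfaced by the field-selector query at helpers_test.go:261. A minimal client-go sketch of the same query, assuming a kubeconfig at the default location (the names here are illustrative, not minikube helpers):

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumption: the kubeconfig's current context already points at the
		// cluster (the test instead passes --context multinode-392000 to kubectl).
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}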

TestMultiNode/serial/CopyFile (69.53s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-392000 status --output json --alsologtostderr: exit status 2 (35.1456139s)

-- stdout --
	[{"Name":"multinode-392000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-392000-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-392000-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

-- /stdout --
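The exit status 2 above lines up with the degraded worker in the JSON: multinode-392000-m02 reports Kubelet "Stopped". A small sketch, not minikube's own code, of parsing that JSON and flagging the stopped kubelet:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// nodeStatus mirrors the fields printed by `minikube status --output json` above.
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// The m02 entry from the stdout block above.
		raw := `[{"Name":"multinode-392000-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]`
		var nodes []nodeStatus
		if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
			panic(err)
		}
		for _, n := range nodes {
			if n.Worker && n.Kubelet != "Running" {
				// Prints: multinode-392000-m02: kubelet Stopped
				fmt.Printf("%s: kubelet %s\n", n.Name, n.Kubelet)
			}
		}
	}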
** stderr ** 
	W1212 23:36:31.939443    3204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1212 23:36:32.016417    3204 out.go:296] Setting OutFile to fd 860 ...
	I1212 23:36:32.017224    3204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:36:32.017224    3204 out.go:309] Setting ErrFile to fd 840...
	I1212 23:36:32.017303    3204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:36:32.031296    3204 out.go:303] Setting JSON to true
	I1212 23:36:32.031296    3204 mustload.go:65] Loading cluster: multinode-392000
	I1212 23:36:32.031296    3204 notify.go:220] Checking for updates...
	I1212 23:36:32.031973    3204 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:36:32.031973    3204 status.go:255] checking status of multinode-392000 ...
	I1212 23:36:32.032698    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:36:34.172734    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:36:34.172846    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:34.172846    3204 status.go:330] multinode-392000 host status = "Running" (err=<nil>)
	I1212 23:36:34.172938    3204 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:36:34.173610    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:36:36.322577    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:36:36.322618    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:36.322652    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:36:38.892317    3204 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:36:38.892317    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:38.892419    3204 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:36:38.906601    3204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:36:38.907639    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:36:41.054528    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:36:41.054720    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:41.054871    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:36:43.559124    3204 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:36:43.559124    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:43.559918    3204 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:36:43.661773    3204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7541131s)
	I1212 23:36:43.679686    3204 ssh_runner.go:195] Run: systemctl --version
	I1212 23:36:43.700711    3204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:36:43.723473    3204 kubeconfig.go:92] found "multinode-392000" server: "https://172.30.51.245:8443"
	I1212 23:36:43.723645    3204 api_server.go:166] Checking apiserver status ...
	I1212 23:36:43.738554    3204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:36:43.772023    3204 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2099/cgroup
	I1212 23:36:43.789154    3204 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda728ade276b580d5a5541017805cb6e1/6c354edfe4229f128c63e6e81f9b8205c4c908288534b6c7e0dec3ef2529e203"
	I1212 23:36:43.802231    3204 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda728ade276b580d5a5541017805cb6e1/6c354edfe4229f128c63e6e81f9b8205c4c908288534b6c7e0dec3ef2529e203/freezer.state
	I1212 23:36:43.816971    3204 api_server.go:204] freezer state: "THAWED"
	I1212 23:36:43.816971    3204 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:36:43.826483    3204 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
	I1212 23:36:43.826483    3204 status.go:421] multinode-392000 apiserver status = Running (err=<nil>)
	I1212 23:36:43.826483    3204 status.go:257] multinode-392000 status: &{Name:multinode-392000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 23:36:43.826483    3204 status.go:255] checking status of multinode-392000-m02 ...
	I1212 23:36:43.828303    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:36:45.943715    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:36:45.943715    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:45.943798    3204 status.go:330] multinode-392000-m02 host status = "Running" (err=<nil>)
	I1212 23:36:45.943798    3204 host.go:66] Checking if "multinode-392000-m02" exists ...
	I1212 23:36:45.944574    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:36:48.040929    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:36:48.041045    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:48.041104    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:36:50.561312    3204 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:36:50.561312    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:50.561440    3204 host.go:66] Checking if "multinode-392000-m02" exists ...
	I1212 23:36:50.577407    3204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:36:50.577407    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:36:52.662498    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:36:52.662584    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:52.662584    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:36:55.192657    3204 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:36:55.192883    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:55.193373    3204 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:36:55.295312    3204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.717884s)
	I1212 23:36:55.308949    3204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:36:55.330238    3204 status.go:257] multinode-392000-m02 status: &{Name:multinode-392000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 23:36:55.330238    3204 status.go:255] checking status of multinode-392000-m03 ...
	I1212 23:36:55.331183    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m03 ).state
	I1212 23:36:57.459868    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:36:57.459868    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:57.459947    3204 status.go:330] multinode-392000-m03 host status = "Running" (err=<nil>)
	I1212 23:36:57.459947    3204 host.go:66] Checking if "multinode-392000-m03" exists ...
	I1212 23:36:57.461089    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m03 ).state
	I1212 23:36:59.561954    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:36:59.562300    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:36:59.562300    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m03 ).networkadapters[0]).ipaddresses[0]
	I1212 23:37:02.084028    3204 main.go:141] libmachine: [stdout =====>] : 172.30.48.192
	
	I1212 23:37:02.084028    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:37:02.084028    3204 host.go:66] Checking if "multinode-392000-m03" exists ...
	I1212 23:37:02.098335    3204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:37:02.098335    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m03 ).state
	I1212 23:37:04.240334    3204 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:37:04.240491    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:37:04.240572    3204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m03 ).networkadapters[0]).ipaddresses[0]
	I1212 23:37:06.764343    3204 main.go:141] libmachine: [stdout =====>] : 172.30.48.192
	
	I1212 23:37:06.764528    3204 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:37:06.765017    3204 sshutil.go:53] new ssh client: &{IP:172.30.48.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m03\id_rsa Username:docker}
	I1212 23:37:06.884027    3204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7845201s)
	I1212 23:37:06.900744    3204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:37:06.921003    3204 status.go:257] multinode-392000-m03 status: &{Name:multinode-392000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
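Each "[executing ==>]" line in the stderr block above is libmachine shelling out to PowerShell to poll Hyper-V. Roughly, the pattern looks like the following sketch (vmState is an illustrative name, not an actual minikube function):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// vmState mirrors the invocation logged above:
	//   powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM <name> ).state
	func vmState(name string) (string, error) {
		cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
		out, err := cmd.Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := vmState("multinode-392000")
		if err != nil {
			panic(err)
		}
		fmt.Println(state) // e.g. "Running", as in the log
	}

Each round trip through powershell.exe takes roughly two seconds in the log, which is why a three-node status check adds up to half a minute.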
multinode_test.go:176: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-392000 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000: (11.9814041s)
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25: (8.4775204s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-392000 -- apply -f                   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC | 12 Dec 23 23:18 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- rollout                    | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t -- nslookup              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl -- nslookup              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o                | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t                          |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | busybox-5bc68d56bd-x7ldl                          |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec                       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-x7ldl -- sh                    |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.48.1                          |                  |                   |         |                     |                     |
	| node    | add -p multinode-392000 -v 3                      | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:32 UTC | 12 Dec 23 23:35 UTC |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:11:30
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.071716    8472 out.go:309] Setting ErrFile to fd 756...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.094706    8472 out.go:303] Setting JSON to false
	I1212 23:11:30.097728    8472 start.go:128] hostinfo: {"hostname":"minikube7","uptime":76287,"bootTime":1702346402,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:11:30.097728    8472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:11:30.099331    8472 out.go:177] * [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:11:30.099722    8472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:11:30.099722    8472 notify.go:220] Checking for updates...
	I1212 23:11:30.100958    8472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:11:30.101483    8472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:11:30.102516    8472 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:11:30.103354    8472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:11:30.104853    8472 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:11:35.379035    8472 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:11:35.380001    8472 start.go:298] selected driver: hyperv
	I1212 23:11:35.380001    8472 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:11:35.380001    8472 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:11:35.430879    8472 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:11:35.431976    8472 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:11:35.432174    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:11:35.432174    8472 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:11:35.432174    8472 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:11:35.432174    8472 start_flags.go:323] config:
	{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:35.432785    8472 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:11:35.434592    8472 out.go:177] * Starting control plane node multinode-392000 in cluster multinode-392000
	I1212 23:11:35.434882    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:11:35.435410    8472 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:11:35.435444    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:11:35.435894    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:11:35.435894    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:11:35.436458    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:11:35.436458    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json: {Name:mk07adc881ba1a1ec87edb34c2760e84e9f12eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:35.438010    8472 start.go:365] acquiring machines lock for multinode-392000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:11:35.438172    8472 start.go:369] acquired machines lock for "multinode-392000" in 43.3µs
	I1212 23:11:35.438240    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:11:35.438240    8472 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 23:11:35.439294    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:11:35.439734    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:11:35.439996    8472 client.go:168] LocalClient.Create starting
	I1212 23:11:35.440162    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441050    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441543    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:11:37.487993    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:11:39.204044    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:11:39.204143    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:39.204222    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:40.663233    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:44.190819    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:44.191081    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:44.194062    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:11:44.711737    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: Creating VM...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:47.732456    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:47.732576    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:47.732727    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:11:47.732880    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:49.467956    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:49.468070    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:49.468070    8472 main.go:141] libmachine: Creating VHD
	I1212 23:11:49.468208    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F469FE2D-E21B-45E1-BE12-1FCB18DB12B2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:11:53.108721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:56.276637    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -SizeBytes 20000MB
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stderr =====>] : 
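
The "Writing magic tar header" / "Writing SSH key tar header" steps above, followed by Convert-VHD and Resize-VHD, reflect the docker-machine-style trick: a tiny fixed VHD is created, a raw tar stream carrying the SSH key is written into its data area so the guest can pick it up on first boot, and the file is then converted to a dynamic VHD and resized. A hedged sketch of the tar-writing part (paths are hypothetical; the real driver also prepends a magic marker entry, omitted here):

package main

import (
	"archive/tar"
	"os"
)

func main() {
	key, err := os.ReadFile("id_rsa.pub") // hypothetical public key path
	if err != nil {
		panic(err)
	}
	// Write a tar stream at the start of the fixed VHD's payload; the VHD is
	// subsequently converted to dynamic and resized, as in the log above.
	f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0) // hypothetical path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Mode: 0700, Typeflag: tar.TypeDir}); err != nil {
		panic(err)
	}
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}); err != nil {
		panic(err)
	}
	if _, err := tw.Write(key); err != nil {
		panic(err)
	}
}
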
	I1212 23:11:58.764692    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-392000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000 -DynamicMemoryEnabled $false
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:04.436332    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000 -Count 2
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\boot2docker.iso'
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd'
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:11.817904    8472 main.go:141] libmachine: Starting VM...
	I1212 23:12:11.817904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:12:14.636759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:16.857062    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:16.857260    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:16.857330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:20.386945    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:22.605951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:26.191747    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:28.348821    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:30.824944    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:30.825184    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:31.825449    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:36.445712    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:36.445785    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:37.459217    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stderr =====>] : 
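
The repeated state/IP probes above are a plain poll loop: the host is considered up once Get-VM reports Running and the first network adapter has an IPv4 address (empty stdout until DHCP completes). A minimal Go sketch of that loop, assuming the VM name from this run and a hypothetical 5-minute deadline:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func ps(script string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "multinode-392000" // name taken from the log
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		state, _ := ps(`( Hyper-V\Get-VM ` + vm + ` ).state`)
		ip, _ := ps(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
		if state == "Running" && ip != "" {
			fmt.Println("VM is up at", ip)
			return
		}
		time.Sleep(time.Second) // the log shows roughly one-second pauses between rounds
	}
	fmt.Println("timed out waiting for an IPv4 address")
}
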
	I1212 23:12:42.223526    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:44.305043    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:44.305406    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:44.305406    8472 machine.go:88] provisioning docker machine ...
	I1212 23:12:44.305506    8472 buildroot.go:166] provisioning hostname "multinode-392000"
	I1212 23:12:44.305650    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:46.463699    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:48.946017    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:48.946116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:48.952068    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:48.964084    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:48.964084    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname
	I1212 23:12:49.130659    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000
	
	I1212 23:12:49.130793    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:51.216440    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:53.725386    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:53.726016    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:53.726016    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:12:53.876910    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
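
Both hostname commands above run over the machine's generated SSH key as the docker user. A self-contained sketch of that transport using golang.org/x/crypto/ssh (key path is hypothetical; the IP and command are taken from the log):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa") // hypothetical path to the machine's private key
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.30.51.245:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(`sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}
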
	I1212 23:12:53.876910    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:12:53.877039    8472 buildroot.go:174] setting up certificates
	I1212 23:12:53.877109    8472 provision.go:83] configureAuth start
	I1212 23:12:53.877163    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:55.991772    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:58.499603    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:00.594939    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:03.100178    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:03.100273    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:03.100273    8472 provision.go:138] copyHostCerts
	I1212 23:13:03.100538    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:13:03.100666    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:13:03.100666    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:13:03.101260    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:13:03.102786    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:13:03.103156    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:13:03.103156    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:13:03.103581    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:13:03.104593    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:13:03.105032    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:13:03.105032    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:13:03.105182    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:13:03.106302    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000 san=[172.30.51.245 172.30.51.245 localhost 127.0.0.1 minikube multinode-392000]
	I1212 23:13:03.360027    8472 provision.go:172] copyRemoteCerts
	I1212 23:13:03.374057    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:13:03.374057    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:08.008195    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:08.116237    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7420653s)
	I1212 23:13:08.116237    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:13:08.116427    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:13:08.152557    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:13:08.153040    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:13:08.195988    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:13:08.196559    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:13:08.232338    8472 provision.go:86] duration metric: configureAuth took 14.3551646s
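
configureAuth above signs a server certificate with the minikube CA, listing the VM IP plus the localhost/minikube/hostname names as SANs (see the san=[...] line). A hedged sketch of that signing step with Go's crypto/x509; a throwaway self-signed CA stands in for ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical CA standing in for the files read at the top of this run.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN set the log reports for this VM.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("172.30.51.245"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-392000"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
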
	I1212 23:13:08.232338    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:13:08.233351    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:13:08.233351    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:10.326980    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:12.830327    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:12.831103    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:12.831103    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:13:12.971332    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:13:12.971397    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:13:12.971686    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:13:12.971759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:17.524781    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:17.524929    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:17.532264    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:17.532875    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:17.533036    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:13:17.693682    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:13:17.693682    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:19.797719    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:22.305428    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:22.305611    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:22.311364    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:22.312148    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:22.312148    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:13:23.268460    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:13:23.268460    8472 machine.go:91] provisioned docker machine in 38.9628792s
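
The diff-or-replace one-liner above is an idempotent install: diff -u exits non-zero when the files differ (or, as here on first boot, when the target does not exist yet, hence the "can't stat" output), which drives the || branch that swaps in the new unit and restarts docker; an unchanged unit is left alone. A tiny Go sketch that just assembles that command string:

package main

import "fmt"

func main() {
	unit := "/lib/systemd/system/docker.service"
	// Only replace and restart when the rendered unit actually differs;
	// diff -u exits non-zero on difference, driving the || branch.
	cmd := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		unit)
	fmt.Println(cmd)
}
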
	I1212 23:13:23.268460    8472 client.go:171] LocalClient.Create took 1m47.8279792s
	I1212 23:13:23.268460    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m47.8282413s
	I1212 23:13:23.268460    8472 start.go:300] post-start starting for "multinode-392000" (driver="hyperv")
	I1212 23:13:23.268460    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:13:23.283134    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:13:23.283134    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:25.344143    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:25.344398    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:25.344531    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:27.853202    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:27.960465    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6773102s)
	I1212 23:13:27.975019    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:13:27.981168    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:13:27.981317    8472 command_runner.go:130] > ID=buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:13:27.981317    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:13:27.981408    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:13:27.981509    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:13:27.981573    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:13:27.982899    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:13:27.982899    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:13:27.996731    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:13:28.011281    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:13:28.049499    8472 start.go:303] post-start completed in 4.7810169s
	I1212 23:13:28.051903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:30.124520    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:32.635986    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:32.636168    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:32.636335    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:13:32.639612    8472 start.go:128] duration metric: createHost completed in 1m57.2008454s
	I1212 23:13:32.639734    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:37.252006    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:37.252675    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:37.252675    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
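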
	I1212 23:13:37.394466    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422817.389981544
	
	I1212 23:13:37.394466    8472 fix.go:206] guest clock: 1702422817.389981544
	I1212 23:13:37.394466    8472 fix.go:219] Guest: 2023-12-12 23:13:37.389981544 +0000 UTC Remote: 2023-12-12 23:13:32.6396781 +0000 UTC m=+122.746612401 (delta=4.750303444s)
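
The guest-clock check above reads date +%s.%N over SSH, compares it to the host clock, and resets the guest with sudo date -s @<epoch> when the delta is large (4.75s in this run). A minimal sketch of that comparison, assuming the captured output from this run and a hypothetical 2-second threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Stand-in for the `date +%s.%N` output captured over SSH in the log.
	guestOut := "1702422817.389981544"
	secs, err := strconv.ParseInt(strings.SplitN(guestOut, ".", 2)[0], 10, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(secs, 0)
	delta := guest.Sub(time.Now())
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta < -2*time.Second || delta > 2*time.Second { // hypothetical threshold
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}
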
	I1212 23:13:37.394466    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:39.525951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:42.048856    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:42.049171    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:42.054999    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:42.057020    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:42.057020    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702422817
	I1212 23:13:42.207558    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:13:37 UTC 2023
	
	I1212 23:13:42.207558    8472 fix.go:226] clock set: Tue Dec 12 23:13:37 UTC 2023
	 (err=<nil>)
	I1212 23:13:42.207558    8472 start.go:83] releasing machines lock for "multinode-392000", held for 2m6.7687735s
	I1212 23:13:42.208388    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:46.748039    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:46.748116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:46.752230    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:13:46.752339    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:46.765270    8472 ssh_runner.go:195] Run: cat /version.json
	I1212 23:13:46.765814    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:51.518393    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.518589    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.519047    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.538571    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.618146    8472 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 23:13:51.618146    8472 ssh_runner.go:235] Completed: cat /version.json: (4.8528548s)
	I1212 23:13:51.632470    8472 ssh_runner.go:195] Run: systemctl --version
	I1212 23:13:51.705182    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:13:51.705326    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9530322s)
	I1212 23:13:51.705474    8472 command_runner.go:130] > systemd 247 (247)
	I1212 23:13:51.705474    8472 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:13:51.717133    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:13:51.725591    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:13:51.726008    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:13:51.738060    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:13:51.760525    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:13:51.761431    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
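
The find/mv pair above parks conflicting bridge and podman CNI configs under a .mk_disabled suffix rather than deleting them, so the container runtime ignores them but they remain recoverable. A local Go equivalent of that rename pass (illustrative only; the logged version runs remotely under sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		base := filepath.Base(m)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled on an earlier pass
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
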
	I1212 23:13:51.761431    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:51.761737    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:51.787290    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:13:51.802604    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:13:51.833298    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:13:51.849124    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:13:51.865424    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:13:51.896430    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.925062    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:13:51.954292    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.986199    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:13:52.018341    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:13:52.051014    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:13:52.066722    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:13:52.079021    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:13:52.108672    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:52.285653    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
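
The run of sed commands above rewrites /etc/containerd/config.toml in place: it pins the sandbox image to pause:3.9, forces SystemdCgroup = false (cgroupfs), and normalizes the runc runtime and CNI conf_dir settings before reloading and restarting containerd. A sketch of the same substitutions done with Go regexps over a toy config (not the real file):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Toy config standing in for /etc/containerd/config.toml.
	conf := `
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// The same edits the logged sed commands perform, preserving indentation
	// via the captured leading whitespace group.
	conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(conf, `${1}SystemdCgroup = false`)
	fmt.Print(conf)
}
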
	I1212 23:13:52.311279    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:52.326723    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Unit]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:13:52.345659    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:13:52.345659    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:13:52.345659    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Service]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Type=notify
	I1212 23:13:52.345659    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:13:52.345659    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:13:52.346602    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:13:52.346602    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:13:52.346602    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:13:52.346602    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:13:52.346602    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:13:52.346602    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:13:52.346602    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:13:52.346602    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:13:52.346602    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:13:52.346602    8472 command_runner.go:130] > Delegate=yes
	I1212 23:13:52.346602    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:13:52.346602    8472 command_runner.go:130] > KillMode=process
	I1212 23:13:52.346602    8472 command_runner.go:130] > [Install]
	I1212 23:13:52.346602    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:13:52.361605    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.398612    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:13:52.438497    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.478249    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.515469    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:13:52.572526    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.596922    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:52.625715    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:13:52.640295    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:13:52.648317    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:13:52.660918    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:13:52.675527    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:13:52.716542    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:13:52.882321    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:13:53.028395    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:13:53.028810    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:13:53.070347    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:53.231794    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:13:54.707655    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4758548s)
	I1212 23:13:54.722714    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:54.886957    8472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 23:13:55.059072    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:55.219495    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.397909    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 23:13:55.436243    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.597738    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 23:13:55.697504    8472 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 23:13:55.711625    8472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:13:55.718995    8472 command_runner.go:130] > Device: 16h/22d	Inode: 928         Links: 1
	I1212 23:13:55.718995    8472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 23:13:55.719086    8472 command_runner.go:130] > Access: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Modify: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Change: 2023-12-12 23:13:55.617702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] >  Birth: -
	I1212 23:13:55.719245    8472 start.go:543] Will wait 60s for crictl version
	I1212 23:13:55.732224    8472 ssh_runner.go:195] Run: which crictl
	I1212 23:13:55.737239    8472 command_runner.go:130] > /usr/bin/crictl
	I1212 23:13:55.751402    8472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:13:55.821560    8472 command_runner.go:130] > Version:  0.1.0
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeName:  docker
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:13:55.821684    8472 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 23:13:55.831458    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.865302    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.877867    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.906635    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.909704    8472 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 23:13:55.909704    8472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: 172.30.48.1/20
	I1212 23:13:55.931095    8472 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 23:13:55.936984    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:13:55.954782    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:13:55.966850    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:13:55.989987    8472 docker.go:671] Got preloaded images: 
	I1212 23:13:55.989987    8472 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 23:13:56.002978    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:13:56.016572    8472 command_runner.go:139] > {"Repositories":{}}
	I1212 23:13:56.029505    8472 ssh_runner.go:195] Run: which lz4
	I1212 23:13:56.035359    8472 command_runner.go:130] > /usr/bin/lz4
	I1212 23:13:56.035359    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:13:56.046382    8472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:13:56.052856    8472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 23:13:58.736125    8472 docker.go:635] Took 2.700536 seconds to copy over tarball
	I1212 23:13:58.753146    8472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:14:08.022919    8472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2697318s)
	I1212 23:14:08.022919    8472 ssh_runner.go:146] rm: /preloaded.tar.lz4
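
The preload path above is: stat to see whether the tarball is already on the guest, scp it over if not, extract with tar delegating decompression to lz4 (-I lz4), then delete the tarball. A sketch of the extraction step as it would look driven from Go (run on the guest side; assumes tar and lz4 are on PATH, as they are in the buildroot image):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no preload present:", err)
		return
	}
	// Mirrors the logged extraction: tar shells out to lz4 for decompression.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	fmt.Println("preloaded images extracted; tarball can be removed")
}
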
	I1212 23:14:08.095190    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:14:08.111721    8472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 23:14:08.111721    8472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 23:14:08.157625    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:14:08.340167    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:14:10.676687    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3364436s)
	I1212 23:14:10.688217    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:14:10.713622    8472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 23:14:10.713688    8472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:10.713884    8472 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 23:14:10.713884    8472 cache_images.go:84] Images are preloaded, skipping loading
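
	Note: "Images are preloaded" is decided by listing image tags exactly as the log shows and checking for the expected set. A small illustrative sketch of that check (the image name mirrors the stdout block above; this is not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	images := strings.Fields(string(out))
    	const want = "registry.k8s.io/kube-apiserver:v1.28.4"
    	for _, img := range images {
    		if img == want {
    			fmt.Println("preload verified:", want)
    			return
    		}
    	}
    	fmt.Println(want, "wasn't preloaded; fall back to the tarball or a pull")
    }
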
	I1212 23:14:10.725093    8472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 23:14:10.761269    8472 command_runner.go:130] > cgroupfs
	I1212 23:14:10.761441    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:10.761635    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:10.761699    8472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:14:10.761699    8472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.51.245 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-392000 NodeName:multinode-392000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.51.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.51.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:14:10.761920    8472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.51.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-392000"
	  kubeletExtraArgs:
	    node-ip: 172.30.51.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.51.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:14:10.762050    8472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
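
	Note: the kubeadm config and kubelet drop-in above are rendered from templates, so a quick way to catch template regressions is to round-trip the generated YAML through a parser and spot-check fields. A hypothetical sketch (assumes gopkg.in/yaml.v3 is available; the embedded document is a trimmed copy of the KubeletConfiguration from the log):

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    // Trimmed copy of the generated KubeletConfiguration shown above.
    const kubeletCfg = "apiVersion: kubelet.config.k8s.io/v1beta1\n" +
    	"kind: KubeletConfiguration\n" +
    	"cgroupDriver: cgroupfs\n" +
    	"hairpinMode: hairpin-veth\n" +
    	"clusterDomain: \"cluster.local\"\n" +
    	"failSwapOn: false\n" +
    	"staticPodPath: /etc/kubernetes/manifests\n"

    func main() {
    	var cfg map[string]interface{}
    	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
    		panic(err) // a template regression would surface here
    	}
    	fmt.Println("cgroupDriver:", cfg["cgroupDriver"])
    	fmt.Println("staticPodPath:", cfg["staticPodPath"])
    }
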
	I1212 23:14:10.779262    8472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:14:10.794245    8472 command_runner.go:130] > kubeadm
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubectl
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubelet
	I1212 23:14:10.794911    8472 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:14:10.809051    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:14:10.823032    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:14:10.848411    8472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:14:10.870951    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 23:14:10.911088    8472 ssh_runner.go:195] Run: grep 172.30.51.245	control-plane.minikube.internal$ /etc/hosts
	I1212 23:14:10.917196    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.51.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:14:10.933858    8472 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000 for IP: 172.30.51.245
	I1212 23:14:10.933934    8472 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:10.934858    8472 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 23:14:10.935530    8472 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 23:14:10.936524    8472 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key
	I1212 23:14:10.936810    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt with IP's: []
	I1212 23:14:11.093297    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt ...
	I1212 23:14:11.093297    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt: {Name:mk11a4d3835ab9ea840eb8ac6add84affb6c8dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.094980    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key ...
	I1212 23:14:11.094980    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key: {Name:mk06fddcf6422638da0b31b4d428923c70703238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.095936    8472 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa
	I1212 23:14:11.096955    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa with IP's: [172.30.51.245 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:14:11.196952    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa ...
	I1212 23:14:11.197202    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa: {Name:mkdf435dcc8983bec1e572c7a448162db34b2756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.198846    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa ...
	I1212 23:14:11.198846    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa: {Name:mk41672c6a02cbb3382bef7d288d52f8f77ae5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.199921    8472 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt
	I1212 23:14:11.213239    8472 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key
	I1212 23:14:11.214508    8472 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key
	I1212 23:14:11.214661    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt with IP's: []
	I1212 23:14:11.328325    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt ...
	I1212 23:14:11.328325    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt: {Name:mk6e1ad80e6dad066789266c677d39834bd11583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.330616    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key ...
	I1212 23:14:11.330616    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key: {Name:mk3959079764fecf7ecbee13715f18146dcf3506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
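
	Note: certs.go/crypto.go above are generating client and serving certificates signed by the cached minikubeCA key. For orientation, here is a compressed, self-signed sketch of the same mechanics with Go's standard library (illustrative only: minikube signs with its CA rather than self-signing, and the subject below is hypothetical):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"os"
    	"time"
    )

    func main() {
    	// Fresh RSA key for the client certificate.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	// Self-signed for brevity: the template doubles as the parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
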
	I1212 23:14:11.332006    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:14:11.332144    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:14:11.332442    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:14:11.342046    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:14:11.342358    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:14:11.342600    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:14:11.342813    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:14:11.343009    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:14:11.343165    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem (1338 bytes)
	W1212 23:14:11.343825    8472 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816_empty.pem, impossibly tiny 0 bytes
	I1212 23:14:11.343825    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 23:14:11.344117    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 23:14:11.344381    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 23:14:11.344630    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 23:14:11.344862    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem (1708 bytes)
	I1212 23:14:11.344862    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem -> /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.345574    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.345718    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:11.345852    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:14:11.386214    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:14:11.425674    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:14:11.464191    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:14:11.502474    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:14:11.538128    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:14:11.575129    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:14:11.613906    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:14:11.650659    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem --> /usr/share/ca-certificates/13816.pem (1338 bytes)
	I1212 23:14:11.686706    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /usr/share/ca-certificates/138162.pem (1708 bytes)
	I1212 23:14:11.726349    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:14:11.762200    8472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:14:11.800421    8472 ssh_runner.go:195] Run: openssl version
	I1212 23:14:11.809841    8472 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:14:11.823469    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13816.pem && ln -fs /usr/share/ca-certificates/13816.pem /etc/ssl/certs/13816.pem"
	I1212 23:14:11.861330    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.882273    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.889871    8472 command_runner.go:130] > 51391683
	I1212 23:14:11.903385    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13816.pem /etc/ssl/certs/51391683.0"
	I1212 23:14:11.935310    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138162.pem && ln -fs /usr/share/ca-certificates/138162.pem /etc/ssl/certs/138162.pem"
	I1212 23:14:11.964261    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970426    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970992    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.982253    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.990140    8472 command_runner.go:130] > 3ec20f2e
	I1212 23:14:12.009886    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138162.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:14:12.038995    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:14:12.069702    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.089604    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.096884    8472 command_runner.go:130] > b5213941
	I1212 23:14:12.110390    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
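
	Note: the openssl/ln pairs above install each CA into the VM's trust store: `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941), and /etc/ssl/certs/<hash>.0 must point at the PEM for OpenSSL-based clients to find it. A small illustrative sketch of one such install step (paths taken from the log; not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // "b5213941" in the log above
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Equivalent of the logged `test -L <link> || ln -fs <cert> <link>`.
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink(cert, link); err != nil {
    			panic(err)
    		}
    	}
    }
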
	I1212 23:14:12.140395    8472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:14:12.146418    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 kubeadm.go:404] StartCluster: {Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:14:12.155995    8472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 23:14:12.194954    8472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:14:12.223698    8472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:14:12.252003    8472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266543    8472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266717    8472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:14:12.516893    8472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.516947    8472 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.517226    8472 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:14:12.517226    8472 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:14:13.027121    8472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027121    8472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027384    8472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027384    8472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027545    8472 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.027656    8472 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.446026    8472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447343    8472 out.go:204]   - Generating certificates and keys ...
	I1212 23:14:13.446026    8472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447732    8472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:14:13.447800    8472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:14:13.448160    8472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.448217    8472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.576197    8472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.576331    8472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.756341    8472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.756398    8472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.844910    8472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:13.844957    8472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:14.189004    8472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.189084    8472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.353924    8472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.353924    8472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.354351    8472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.354351    8472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.509618    8472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.509618    8472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.510200    8472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.510200    8472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.634812    8472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.634883    8472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.965686    8472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:14.965747    8472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:15.155790    8472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:14:15.155863    8472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:14:15.156194    8472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.156194    8472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.627970    8472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:15.628062    8472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:16.106269    8472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.106461    8472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.241202    8472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.241256    8472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.532306    8472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.532306    8472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.533302    8472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.533432    8472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.538562    8472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.538657    8472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.539723    8472 out.go:204]   - Booting up control plane ...
	I1212 23:14:16.539967    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.540045    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.541855    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.541855    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.543221    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.543286    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.570893    8472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.570998    8472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.572167    8472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572329    8472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572476    8472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:14:16.572590    8472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:14:16.741649    8472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:16.741649    8472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:25.247209    8472 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247209    8472 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247636    8472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.247636    8472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.274937    8472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.274937    8472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.809600    8472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.809600    8472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.810164    8472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:25.810216    8472 kubeadm.go:322] [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:26.326643    8472 kubeadm.go:322] [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.327542    8472 out.go:204]   - Configuring RBAC rules ...
	I1212 23:14:26.326643    8472 command_runner.go:130] > [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.328018    8472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.328018    8472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.341522    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.341728    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.354025    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.354025    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.359843    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.359843    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.364553    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.364553    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.369249    8472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.369249    8472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.393459    8472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.393481    8472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.711238    8472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.711357    8472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.750599    8472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.750686    8472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.751909    8472 kubeadm.go:322] 
	I1212 23:14:26.752244    8472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752244    8472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752424    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.753252    8472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753252    8472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753309    8472 kubeadm.go:322] 
	I1212 23:14:26.753415    8472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322] 
	I1212 23:14:26.753445    8472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.753445    8472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.754014    8472 kubeadm.go:322] 
	I1212 23:14:26.754183    8472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754220    8472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754289    8472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 kubeadm.go:322] 
	I1212 23:14:26.754289    8472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754289    8472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754820    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754820    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754878    8472 kubeadm.go:322] 	--control-plane 
	I1212 23:14:26.754917    8472 command_runner.go:130] > 	--control-plane 
	I1212 23:14:26.754917    8472 kubeadm.go:322] 
	I1212 23:14:26.754995    8472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] 
	I1212 23:14:26.755165    8472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755165    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755707    8472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:26.755762    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:26.756717    8472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:14:26.771363    8472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 23:14:26.781345    8472 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: 2023-12-12 23:12:39.138849800 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Change: 2023-12-12 23:12:30.064000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] >  Birth: -
	I1212 23:14:26.781345    8472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:14:26.781345    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:14:26.831214    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:14:28.360489    8472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5292685s)
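
	Note: CNI setup is just a kubectl apply of the generated kindnet manifest with the cluster's own binary, as logged above. A minimal sketch of that invocation (same paths as the log; exec.Command again stands in for the SSH runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "/var/tmp/minikube/cni.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet created"
    	if err != nil {
    		panic(err)
    	}
    }
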
	I1212 23:14:28.360489    8472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:14:28.377434    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.378438    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-392000 minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.385676    8472 command_runner.go:130] > -16
	I1212 23:14:28.385745    8472 ops.go:34] apiserver oom_adj: -16
	I1212 23:14:28.554211    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:14:28.554334    8472 command_runner.go:130] > node/multinode-392000 labeled
	I1212 23:14:28.574988    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.698031    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:28.717179    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.830537    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.348608    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.461037    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.849506    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.957356    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.362625    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.472272    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.848396    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.953849    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.353576    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.462341    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.853090    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.967586    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.355892    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.469924    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.859728    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.962773    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.364239    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.470177    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.864784    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.968916    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.351439    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.459257    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.855142    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.992369    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.364118    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.480745    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.848471    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.981045    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.353504    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:36.474547    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.857811    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.009603    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.360939    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.541831    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.855360    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.978223    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.358089    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:38.550481    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.868761    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.022604    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:39.352440    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.596621    8472 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:14:39.596712    8472 command_runner.go:130] > default   0         0s
	I1212 23:14:39.596736    8472 kubeadm.go:1088] duration metric: took 11.2361966s to wait for elevateKubeSystemPrivileges.
	I1212 23:14:39.596811    8472 kubeadm.go:406] StartCluster complete in 27.450269s
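
	Note: the burst of "serviceaccounts \"default\" not found" errors above is expected: elevateKubeSystemPrivileges polls `kubectl get sa default` roughly twice a second until kube-controller-manager creates the default service account (about 11 seconds here). A stdlib sketch of that wait loop (illustrative; minikube's retry helper differs):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
    			"get", "sa", "default").Run()
    		if err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
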
	I1212 23:14:39.596862    8472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.597021    8472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.598694    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.600390    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:14:39.600697    8472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:14:39.600890    8472 addons.go:69] Setting storage-provisioner=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:69] Setting default-storageclass=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:231] Setting addon storage-provisioner=true in "multinode-392000"
	I1212 23:14:39.601014    8472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-392000"
	I1212 23:14:39.601153    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:39.601286    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:39.602024    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.602448    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.615520    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.616537    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
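kapi.go assembles the *rest.Config above directly from the profile's client certificate, key, and CA file. A short client-go sketch that derives an equivalent config from the kubeconfig path shown in the log (newClient is an illustrative name; clientcmd.BuildConfigFromFlags is the standard helper for this):

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient loads the kubeconfig minikube just wrote and builds a typed
    // clientset; host, client cert/key and CA land in rest.Config much as
    // in the kapi.go dump above.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        return kubernetes.NewForConfig(cfg)
    }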
	I1212 23:14:39.618133    8472 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:14:39.618679    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.618746    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.618746    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.618746    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.632969    8472 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1212 23:14:39.632969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Audit-Id: 48d468c3-d2b5-4ebf-8a31-5cfcaaf2e038
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.633400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.633475    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.633615    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634237    8472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634414    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.634442    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:39.634488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.647166    8472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:14:39.647166    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Audit-Id: 1d18df1e-467b-45b4-8fd3-f1be9c0eb077
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.647166    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.647166    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.647166    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.647166    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.650190    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:39.650593    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.650593    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Audit-Id: 257b2ee0-65f9-4fbe-a3e6-2b26b38e4e97
	I1212 23:14:39.650746    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.650746    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.650879    8472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-392000" context rescaled to 1 replicas
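The GET/PUT pair above rewrites the coredns deployment's Scale subresource from 2 replicas to 1, so a single-node cluster runs only one DNS pod. A sketch of the same rescale using client-go's typed scale methods rather than the raw REST calls minikube logs (rescaleCoreDNS is an illustrative name):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS reads the Scale subresource of kube-system/coredns and
    // writes it back with one replica, mirroring the GET/PUT pair above.
    func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
        scale, err := cs.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }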
	I1212 23:14:39.650983    8472 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:14:39.652101    8472 out.go:177] * Verifying Kubernetes components...
	I1212 23:14:39.667782    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:39.958848    8472 command_runner.go:130] > apiVersion: v1
	I1212 23:14:39.958848    8472 command_runner.go:130] > data:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   Corefile: |
	I1212 23:14:39.958848    8472 command_runner.go:130] >     .:53 {
	I1212 23:14:39.958848    8472 command_runner.go:130] >         errors
	I1212 23:14:39.958848    8472 command_runner.go:130] >         health {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            lameduck 5s
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         ready
	I1212 23:14:39.958848    8472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            pods insecure
	I1212 23:14:39.958848    8472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:14:39.958848    8472 command_runner.go:130] >            ttl 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         prometheus :9153
	I1212 23:14:39.958848    8472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            max_concurrent 1000
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         cache 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loop
	I1212 23:14:39.958848    8472 command_runner.go:130] >         reload
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loadbalance
	I1212 23:14:39.958848    8472 command_runner.go:130] >     }
	I1212 23:14:39.958848    8472 command_runner.go:130] > kind: ConfigMap
	I1212 23:14:39.958848    8472 command_runner.go:130] > metadata:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:14:26Z"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   name: coredns
	I1212 23:14:39.958848    8472 command_runner.go:130] >   namespace: kube-system
	I1212 23:14:39.958848    8472 command_runner.go:130] >   resourceVersion: "257"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   uid: 7f397c04-a5c3-4364-9f10-d28458f5d6c8
	I1212 23:14:39.959540    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:14:39.961001    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.962156    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.963642    8472 node_ready.go:35] waiting up to 6m0s for node "multinode-392000" to be "Ready" ...
	I1212 23:14:39.963798    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.963914    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.963987    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.963987    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.969659    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:39.969659    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Audit-Id: ed4f4991-8208-4d64-8919-42fbdb031b1b
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.970862    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:39.972406    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.972406    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.972643    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.972643    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.974394    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:39.975312    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Audit-Id: 8a9ed035-646e-4f38-b110-fe61c0dc496f
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.975401    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.975946    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.488957    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.488957    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.488957    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.488957    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.492969    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:40.492969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Audit-Id: d903c580-8adc-4d96-8f5f-d51f731bc93c
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.492969    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.668167    8472 command_runner.go:130] > configmap/coredns replaced
	I1212 23:14:40.669157    8472 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
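The sed pipeline issued at 23:14:39.959 inserts a log directive before errors and a hosts block before the forward stanza, then replaces the ConfigMap; "configmap/coredns replaced" confirms it took. The rewritten Corefile fragment implied by that command would look like this (abridged to the changed region; derived from the sed expressions, not captured output):

    .:53 {
        log
        errors
        ...
        hosts {
           172.30.48.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }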
	I1212 23:14:40.981876    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.981950    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.982011    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.982011    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.991394    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:40.991394    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Audit-Id: ab5b6285-e3ff-4e6f-b61b-a20df0759ba6
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.991394    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.489914    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.490030    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.490030    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.490030    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.494868    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:41.495917    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Audit-Id: 1e563910-36f9-4968-810e-a0bd4b1bd52f
	I1212 23:14:41.496167    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.496302    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.496696    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.903563    8472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:41.903563    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:41.904285    8472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:41.904285    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:14:41.904285    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.905110    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:41.906532    8472 addons.go:231] Setting addon default-storageclass=true in "multinode-392000"
	I1212 23:14:41.906532    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:41.907304    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.980106    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.980486    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.980486    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.980486    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.985786    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:41.985786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.985786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Audit-Id: 08bb64de-dde1-4fa6-8913-0f6b5de0cf24
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.986033    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.986033    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.986463    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.987219    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
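node_ready.go polls GET /api/v1/nodes/multinode-392000 roughly twice a second and keeps reporting "Ready":"False" until the kubelet posts a Ready condition. A client-go sketch of the same check (nodeReady is an illustrative name):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the node's NodeReady condition is True; the
    // polling above prints "Ready":"False" until this flips.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }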
	I1212 23:14:42.486548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.486653    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.486653    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.486653    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.496333    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:42.496447    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.496447    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.496524    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.496524    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Audit-Id: 4ab1601a-d766-4e5d-a976-df70bc7f3fc6
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.496654    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.497705    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:42.979753    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.979865    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.979865    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.979865    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.984301    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:42.984301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Audit-Id: d84e4388-d133-418c-ad44-eb666ea80368
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.984627    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.984771    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.985134    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.487286    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.487436    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.487436    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.487436    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.493059    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.493240    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Audit-Id: ff7197c8-30b8-4b58-8cc1-df9d319b0dbf
	I1212 23:14:43.493700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.979059    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.979132    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.979132    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.979132    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.984231    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.984231    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Audit-Id: a3b2e6ef-d4d8-4f3e-b9c5-6d5c3c21bbd3
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.984345    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.984345    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.984416    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.984416    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.984602    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.095027    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.095183    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.095249    8472 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:44.095249    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:14:44.095249    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.120131    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
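libmachine resolves the VM's address by shelling out to PowerShell with exactly the expression logged above. A sketch of that call from Go (vmIP is an illustrative helper; the PowerShell one-liner is copied from the log):

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // vmIP runs the PowerShell one-liner from the log and returns the first
    // IP address of the VM's first network adapter.
    func vmIP(vm string) (string, error) {
        expr := `(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }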
	I1212 23:14:44.483249    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.483332    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.483332    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.483332    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.487173    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.488191    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.488191    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.488335    8472 round_trippers.go:580]     Audit-Id: 266b4ffc-e86f-4f1b-b463-36bca9136481
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.488839    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.489392    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:44.989331    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.989428    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.989428    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.989428    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.992917    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.993400    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Audit-Id: d75583c4-9a74-49b4-bbf3-b56138886974
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.993757    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.481494    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.481494    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.481494    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.481778    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.487002    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:45.487002    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Audit-Id: 34cccb14-bef0-4d33-bac4-e822ad4bf7d0
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.487387    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.990444    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.990444    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.990444    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.990444    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.994459    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:45.995453    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Audit-Id: 75a4ef11-ddaa-4f93-8672-e7309c071368
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.995553    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.995597    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.996008    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.478860    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.478860    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.478860    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.478860    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.482906    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:46.482906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.482906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.484021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Audit-Id: f2e453d5-50bc-4639-bda1-a5a03905d0ad
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:46.484906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.485283    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.902984    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
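sshutil.go then dials the VM as user docker with the profile's id_rsa key. A sketch of an equivalent client using golang.org/x/crypto/ssh (dialVM is an illustrative name; the address and key path come from the log line above):

    package sketch

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialVM opens an SSH connection as user docker using the profile's
    // private key, e.g. dialVM("172.30.51.245:22", keyPath).
    func dialVM(addr, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        }
        return ssh.Dial("tcp", addr, cfg)
    }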
	I1212 23:14:46.980436    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.980521    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.980521    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.980521    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.984189    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:46.984189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Audit-Id: 7c159fbf-c0d0-41ed-a33b-761beff59770
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.984333    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.984744    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.985579    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:47.051355    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:47.484303    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.484303    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.484303    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.484303    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.488895    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:47.488895    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Audit-Id: 28e8c341-cf42-49da-a69a-ab79f001048f
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.489240    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:47.868848    8472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:14:47.868942    8472 command_runner.go:130] > pod/storage-provisioner created
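
The serviceaccount/clusterrolebinding/role/rolebinding/endpoints/pod lines above are kubectl's apply output streamed back over SSH: the manifest is applied from inside the VM with the bundled kubectl, not from the Windows host. A sketch of issuing that same command through the SSH client from the earlier snippet, with the command string copied verbatim from the log and one session per command, as ssh_runner does:

	package main

	import "golang.org/x/crypto/ssh"

	// applyAddon runs one command in one SSH session, the way ssh_runner does,
	// and returns kubectl's combined stdout/stderr (the "created" lines above).
	func applyAddon(client *ssh.Client) (string, error) {
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
				"/var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
		return string(out), err
	}
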
	I1212 23:14:47.990911    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.991083    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.991083    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.991083    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.996324    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:47.996324    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Audit-Id: 898f23b9-63a4-46cb-8539-9e21fae3e688
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.997714    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.480781    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.480862    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.480862    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.480862    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.484374    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.485189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.485189    8472 round_trippers.go:580]     Audit-Id: 1a3b1ec7-5eb6-4bb8-b344-5426a5516c00
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.485621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.989623    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.989623    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.989623    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.989698    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.992877    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.993906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Audit-Id: 975a7df8-210f-4288-bec3-86537d1ea98a
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.993906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.993906    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:49.083047    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:49.083318    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:49.083618    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:14:49.220179    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:49.478362    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.478404    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.478488    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.478488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.486550    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:49.486550    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Audit-Id: 886c4e27-fc97-4d2e-be30-23c8528e1331
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.487579    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:49.633908    8472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:14:49.634368    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:14:49.634438    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.634438    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.634438    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.638301    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.638301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Length: 1273
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Audit-Id: 478d6e3c-e333-45bd-ad37-ff39e2c109a4
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.638613    8472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:14:49.639458    8472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.639570    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:14:49.639570    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:49.639632    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.643499    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.643499    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.643499    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Length: 1220
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Audit-Id: a15a2fa8-ae37-4d33-8ee0-c9808f9a288d
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.644178    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.644178    8472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.682970    8472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:14:49.684353    8472 addons.go:502] enable addons completed in 10.0836106s: enabled=[storage-provisioner default-storageclass]
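
The GET on /apis/storage.k8s.io/v1/storageclasses followed by the PUT to storageclasses/standard just above is the default-storageclass addon re-asserting the storageclass.kubernetes.io/is-default-class annotation after the manifest lands. Roughly, with client-go (a sketch reusing a *kubernetes.Clientset as in the polling example; not minikube's addon code):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// markDefault re-reads the StorageClass and writes it back with the
	// default-class annotation set, i.e. the GET + PUT pair in the log.
	func markDefault(cs *kubernetes.Clientset, name string) error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	}
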
	I1212 23:14:49.980729    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.980729    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.980729    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.980729    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.984838    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:49.985229    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Audit-Id: ce24cfdd-3acb-4830-ac23-4db47133d6a3
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.985323    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.985624    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.483312    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.483375    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.483375    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.483375    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.488227    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:50.488227    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Audit-Id: 6991df1a-7c65-4f8c-aa6d-8a4b07664792
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.488335    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.488445    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.981018    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.981153    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.981153    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.981153    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.986420    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:50.987021    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Audit-Id: 05d03ac9-757b-47ae-892d-06c9975e0504
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.987288    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.481784    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.481935    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.481935    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.481935    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.487331    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:51.487741    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Audit-Id: ea8e810d-7571-41b8-a29c-f7b350aa7e48
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.488700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.489229    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:51.980060    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.980060    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.980060    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.980060    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.986763    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:51.987222    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Audit-Id: e66e1130-e80e-4e5c-a2df-c6f097d5374f
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.987303    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.487530    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.487615    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.487615    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.487615    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.491306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.491306    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Audit-Id: 6d39f79a-048a-4380-88c0-1538a97cf6cb
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.492158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.988203    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.988350    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.988350    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.988350    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.991874    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.991874    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Audit-Id: b82dc74d-b44e-41ac-8e64-37803addc6c1
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.991874    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.992376    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.992376    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.992866    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.487128    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.487128    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.487128    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.487128    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.490404    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.490404    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Audit-Id: fcdaf883-7338-4102-abda-846f7169bb26
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.491349    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.491797    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:53.988709    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.988958    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.988958    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.988958    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.992351    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.992351    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Audit-Id: c1836498-4d32-49e6-a01e-d2011a223374
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.992796    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.992872    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.992872    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.993179    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.484052    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.484152    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.484152    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.484152    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.487262    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:54.487786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Audit-Id: f53da0c3-a775-4443-aabf-f7c4222d5d96
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.488171    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.984021    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.984123    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.984123    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.984123    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.989880    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:54.989880    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Audit-Id: c5095c7c-a76c-429e-af60-764abe494287
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.991622    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.485045    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.485181    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.485181    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.485181    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.489762    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:55.489762    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Audit-Id: 4f7c8477-81de-4b39-8164-bf264c826669
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.489762    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.490338    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.490338    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.490621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.987165    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.987255    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.987255    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.987255    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.990960    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:55.991209    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Audit-Id: 730af8dd-1c79-432a-ac28-d735f45d211a
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.991209    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:55.991993    8472 node_ready.go:49] node "multinode-392000" has status "Ready":"True"
	I1212 23:14:55.991993    8472 node_ready.go:38] duration metric: took 16.0282441s waiting for node "multinode-392000" to be "Ready" ...
	I1212 23:14:55.991993    8472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
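
With the node Ready, the test switches to the second wait: every pod matching the labels listed above must itself report Ready. The per-pod check reduces to reading the pod's Ready condition; a small sketch in the same client-go style, again assuming a *kubernetes.Clientset cs as in the earlier snippet (not the pod_ready.go source):

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReady is the per-pod test repeated below for coredns, etcd, the
	// apiserver, and the rest of the system-critical set.
	func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
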
	I1212 23:14:55.992424    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:55.992451    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.992451    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.992451    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.997828    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:55.997828    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Audit-Id: 52d7810c-f76c-4c45-9178-39943c5e611e
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:56.000563    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I1212 23:14:56.005713    8472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:56.005713    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.005713    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.005713    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.005713    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.009293    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:56.009293    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.009293    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Audit-Id: 349c895b-3263-4592-bf5f-cc4fce22f4db
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.009961    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.010548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.010601    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.010601    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.010670    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.013302    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.013302    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Audit-Id: 14638822-3485-4ab6-af72-f2d254050772
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.013994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.014102    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.014102    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.014313    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.014948    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.014948    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.014948    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.014948    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.017876    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.017876    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Audit-Id: e61611d3-94ea-464c-acce-2a665e01fb85
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.018073    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.018159    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.018325    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.018970    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.019023    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.019023    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.019078    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.020855    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:56.020855    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Audit-Id: d723e84b-6004-4853-8f4c-e9de464efdde
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.021772    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.021800    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.021800    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.021800    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.536622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.536622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.536622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.536622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.540896    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:56.540896    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.541442    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Audit-Id: ea416197-cb64-40af-bf73-38fd2e37a823
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.541534    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.541534    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.541670    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.542439    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.542559    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.542559    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.542559    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.544902    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.544902    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Audit-Id: 82379cb0-03c3-4187-8a08-c95f8c2d434e
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.546107    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.027636    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.027717    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.027791    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.027791    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.030425    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.030425    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Audit-Id: 856b15b9-b6fa-489d-9a24-eaaf1afc5bd5
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.031434    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.032501    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.032606    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.032658    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.032658    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.035158    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.035158    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Audit-Id: 2f81449f-83b9-4c66-bc2e-17ac17b48322
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.035158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.534454    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.534587    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.534587    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.534587    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.541021    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:57.541365    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Audit-Id: bb822741-a39c-491c-8b27-f5dc32b9ac7d
	I1212 23:14:57.541943    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.542190    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.542190    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.542190    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.542190    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.545257    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:57.545257    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.545896    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Audit-Id: 27629acd-42f2-4083-aba9-c01ef165283c
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.546084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.546084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.546180    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.546712    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:58.023516    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:58.023822    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.023880    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.023880    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.027764    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.028057    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.028057    8472 round_trippers.go:580]     Audit-Id: 1522c4b2-abdb-44ed-9ac8-0a151cbe371e
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.028173    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.028494    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1212 23:14:58.029540    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.029617    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.029617    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.029617    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.032006    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.033008    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Audit-Id: 5f970653-a2f7-4b0e-ab8b-5146ee17b7e9
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.033046    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.033115    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.033423    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.034124    8472 pod_ready.go:92] pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.034124    8472 pod_ready.go:81] duration metric: took 2.0284013s waiting for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034124    8472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034268    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-392000
	I1212 23:14:58.034268    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.034268    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.034268    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.040664    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:58.040664    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Audit-Id: 8ec23e55-3f6f-45bb-9dd5-58fa0a89221a
	I1212 23:14:58.041172    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-392000","namespace":"kube-system","uid":"9ba15872-d011-4389-bbbd-cda3bb377f30","resourceVersion":"299","creationTimestamp":"2023-12-12T23:14:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.51.245:2379","kubernetes.io/config.hash":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.mirror":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.seen":"2023-12-12T23:14:17.439033677Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1212 23:14:58.041719    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.041719    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.041719    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.041719    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.045328    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.045328    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Audit-Id: 9c560ca1-5f98-49b8-ae36-71e9aa076f5e
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.045328    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.045328    8472 pod_ready.go:92] pod "etcd-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.045328    8472 pod_ready.go:81] duration metric: took 11.2037ms waiting for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-392000
	I1212 23:14:58.046330    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.046330    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.046330    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.048649    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.048649    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Audit-Id: ebed4532-17cb-49da-a702-3de6ff899b2d
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.048649    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-392000","namespace":"kube-system","uid":"4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75","resourceVersion":"330","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.51.245:8443","kubernetes.io/config.hash":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.mirror":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.seen":"2023-12-12T23:14:26.871565960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1212 23:14:58.048649    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.048649    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.048649    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.052979    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.052979    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.052979    8472 round_trippers.go:580]     Audit-Id: ba4e3ef6-8436-406b-be77-63a9e785adac
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.053729    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.053941    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.054233    8472 pod_ready.go:92] pod "kube-apiserver-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.054233    8472 pod_ready.go:81] duration metric: took 8.9055ms waiting for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-392000
	I1212 23:14:58.054233    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.054233    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.054233    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.057795    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.057795    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.057795    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.057795    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Audit-Id: 23c9283e-f0e0-44ab-b1c7-820bcafbc897
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.058055    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.058481    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-392000","namespace":"kube-system","uid":"60a15f93-6e63-4c2e-a54e-7e6a2275127c","resourceVersion":"296","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.mirror":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.seen":"2023-12-12T23:14:26.871570660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1212 23:14:58.059284    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.059347    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.059347    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.059347    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.067599    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:58.067599    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Audit-Id: cd4581bf-1000-4906-812b-59a573920066
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.068544    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.068544    8472 pod_ready.go:92] pod "kube-controller-manager-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.068544    8472 pod_ready.go:81] duration metric: took 14.3106ms waiting for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.068544    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.194675    8472 request.go:629] Waited for 125.8741ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.194825    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.194825    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.198109    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.198109    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Audit-Id: 5a8d39b0-49cf-41c3-9e07-80cfc7e1b033
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.198994    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.199312    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55nr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"76f72515-2132-4473-883e-2846ebaca62e","resourceVersion":"403","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"932f2a4e-5c28-4c7c-8885-1298fbe1d167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932f2a4e-5c28-4c7c-8885-1298fbe1d167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I1212 23:14:58.398673    8472 request.go:629] Waited for 198.4474ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.398787    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.398966    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.401717    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.401717    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.401717    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Audit-Id: b728eb3e-d54c-43cb-90ce-e7b356f69ae4
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.402725    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.402828    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.403583    8472 pod_ready.go:92] pod "kube-proxy-55nr8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.403583    8472 pod_ready.go:81] duration metric: took 335.0375ms waiting for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.403583    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.601380    8472 request.go:629] Waited for 197.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.601681    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.601681    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.605957    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.606145    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Audit-Id: 02f9b40f-c4e0-4c98-bcbc-9913ccb796e7
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.606409    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-392000","namespace":"kube-system","uid":"1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2","resourceVersion":"295","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.mirror":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.seen":"2023-12-12T23:14:26.871571660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1212 23:14:58.789396    8472 request.go:629] Waited for 182.2618ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789688    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789779    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.789779    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.789828    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.793340    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.794060    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Audit-Id: e123c53f-d439-4d57-931f-9f875d26f581
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.794126    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.795030    8472 pod_ready.go:92] pod "kube-scheduler-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.795030    8472 pod_ready.go:81] duration metric: took 391.4452ms waiting for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.795030    8472 pod_ready.go:38] duration metric: took 2.8027177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:14:58.795030    8472 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:14:58.810986    8472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:14:58.830637    8472 command_runner.go:130] > 2099
	I1212 23:14:58.830637    8472 api_server.go:72] duration metric: took 19.1794438s to wait for apiserver process to appear ...
	I1212 23:14:58.830637    8472 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:14:58.830637    8472 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:14:58.838776    8472 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
	I1212 23:14:58.839718    8472 round_trippers.go:463] GET https://172.30.51.245:8443/version
	I1212 23:14:58.839718    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.839718    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.839718    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.841290    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:58.841290    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.841290    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Length: 264
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.841836    8472 round_trippers.go:580]     Audit-Id: 46b8d46d-380f-4f82-941f-34d5ff7fc981
	I1212 23:14:58.841875    8472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:14:58.841973    8472 api_server.go:141] control plane version: v1.28.4
	I1212 23:14:58.842105    8472 api_server.go:131] duration metric: took 11.468ms to wait for apiserver health ...
	I1212 23:14:58.842105    8472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:14:58.990794    8472 request.go:629] Waited for 148.3275ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990949    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990993    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.990993    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.990993    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.996780    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:58.996780    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.997050    8472 round_trippers.go:580]     Audit-Id: ef9a1c82-2d0d-4fd5-aef9-3720896905c4
	I1212 23:14:58.998795    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.002276    8472 system_pods.go:59] 8 kube-system pods found
	I1212 23:14:59.002323    8472 system_pods.go:61] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.002414    8472 system_pods.go:74] duration metric: took 160.3082ms to wait for pod list to return data ...
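
The "Waited for ... due to client-side throttling" lines above come from client-go's token-bucket rate limiter, which spaces out API requests on the client before the server's priority-and-fairness layer ever sees them. A minimal sketch of the same back-off pattern, using golang.org/x/time/rate as a stand-in (client-go uses its own flowcontrol package; the QPS/burst numbers below just echo client-go's defaults):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 5 requests/second steady state with a burst of 10, similar in spirit
	// to client-go's default QPS/Burst settings.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	for i := 0; i < 3; i++ {
		start := time.Now()
		// Wait blocks until a token is available, producing exactly the
		// "Waited for Nms due to client-side throttling" pauses seen above.
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		fmt.Printf("request %d waited %v\n", i, time.Since(start))
		// ... issue GET /api/v1/namespaces/kube-system/pods here ...
	}
}
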
	I1212 23:14:59.002414    8472 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:14:59.195077    8472 request.go:629] Waited for 192.5258ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.195622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.195622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.199306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.199787    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Audit-Id: d11e054d-44f1-4ba9-98c1-9a69160ffdff
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Length: 261
	I1212 23:14:59.199787    8472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7c305be4-9460-4ff1-a283-85a13dcb1cde","resourceVersion":"367","creationTimestamp":"2023-12-12T23:14:39Z"}}]}
	I1212 23:14:59.199787    8472 default_sa.go:45] found service account: "default"
	I1212 23:14:59.199787    8472 default_sa.go:55] duration metric: took 197.3719ms for default service account to be created ...
	I1212 23:14:59.199787    8472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:14:59.396801    8472 request.go:629] Waited for 196.4246ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.397321    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.397321    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.400691    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.400691    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Audit-Id: 70f11694-1074-4f5f-b23d-4a24efbaa455
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.403399    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.408656    8472 system_pods.go:86] 8 kube-system pods found
	I1212 23:14:59.409213    8472 system_pods.go:89] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.409293    8472 system_pods.go:126] duration metric: took 209.505ms to wait for k8s-apps to be running ...
	I1212 23:14:59.409358    8472 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:14:59.423142    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:59.445203    8472 system_svc.go:56] duration metric: took 35.9106ms WaitForService to wait for kubelet.
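
The WaitForService step above reduces to a single exit code: `systemctl is-active --quiet` prints nothing and answers purely via its exit status, which is 0 only while the unit is active. A hedged sketch of that check (sshRun is a hypothetical stand-in for minikube's ssh_runner, assumed to return a non-nil error on non-zero exit):

package provision

// kubeletRunning reports whether the kubelet unit is active on the guest.
// The --quiet flag suppresses all output, so a nil error from the runner
// means "running" and anything else means "not running (or unreachable)".
func kubeletRunning(sshRun func(cmd string) error) bool {
	return sshRun("sudo systemctl is-active --quiet service kubelet") == nil
}
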
	I1212 23:14:59.445871    8472 kubeadm.go:581] duration metric: took 19.7946755s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:14:59.445871    8472 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:14:59.598916    8472 request.go:629] Waited for 152.7318ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.599012    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.599012    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.605849    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:59.605849    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Audit-Id: 36bbb4b8-2cd2-4825-9a0a-f9d3f7de5388
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.605849    8472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1212 23:14:59.606649    8472 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:14:59.606649    8472 node_conditions.go:123] node cpu capacity is 2
	I1212 23:14:59.606649    8472 node_conditions.go:105] duration metric: took 160.7768ms to run NodePressure ...
	I1212 23:14:59.606649    8472 start.go:228] waiting for startup goroutines ...
	I1212 23:14:59.606649    8472 start.go:233] waiting for cluster config update ...
	I1212 23:14:59.606649    8472 start.go:242] writing updated cluster config ...
	I1212 23:14:59.609246    8472 out.go:177] 
	I1212 23:14:59.621487    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:59.622710    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.625530    8472 out.go:177] * Starting worker node multinode-392000-m02 in cluster multinode-392000
	I1212 23:14:59.626570    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:14:59.626570    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:14:59.627622    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:14:59.627622    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:14:59.627622    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.635421    8472 start.go:365] acquiring machines lock for multinode-392000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:14:59.636404    8472 start.go:369] acquired machines lock for "multinode-392000-m02" in 983.5µs
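
The lock spec printed by start.go:365 above ({... Delay:500ms Timeout:13m0s ...}) describes a named mutex that serializes machine creation across concurrent minikube processes on the same host. A rough file-based sketch of the same idea, under the assumption of a simple lock file (the real implementation is a named-mutex library, not this):

package provision

import (
	"fmt"
	"os"
	"time"
)

// acquireMachinesLock is an illustrative stand-in for the machines lock:
// retry an O_EXCL create every delay until timeout, so only one process
// provisions machines at a time. The caller releases with os.Remove(path).
func acquireMachinesLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return nil // lock acquired
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}
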
	I1212 23:14:59.636641    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1212 23:14:59.636641    8472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1212 23:14:59.637295    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:14:59.637925    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:14:59.637925    8472 client.go:168] LocalClient.Create starting
	I1212 23:14:59.637925    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:14:59.638507    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.638593    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.638845    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:14:59.639076    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.639124    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.639207    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:15:01.516858    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:04.771547    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:04.771630    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:04.771709    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:08.419999    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:08.420189    8472 main.go:141] libmachine: [stderr =====>] : 
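
The switch query above shells out to PowerShell with ConvertTo-Json and then decodes the array host-side; SwitchType 1 is "Internal" in Hyper-V's enum, which is how the NAT-backed Default Switch reports itself. A sketch of the decode step, assuming just the three projected fields (helper name is made up):

package provision

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the `Select Id, Name, SwitchType` projection above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// pickSwitch decodes ConvertTo-Json output and takes the first entry;
// the PowerShell query already filtered to usable switches and sorted them.
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	if len(switches) == 0 {
		return "", fmt.Errorf("no usable Hyper-V switch found")
	}
	return switches[0].Name, nil
}
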
	I1212 23:15:08.422680    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:15:08.872411    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: Creating VM...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:12.102765    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:12.102977    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:12.103063    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:15:12.103063    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:13.864474    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:13.864777    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:13.864985    8472 main.go:141] libmachine: Creating VHD
	I1212 23:15:13.864985    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C3CD4AE2-4C48-4AEE-B99B-DEEF0B4769F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:17.628988    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:15:17.629139    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:15:17.638018    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:20.769313    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -SizeBytes 20000MB
	I1212 23:15:23.326059    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:23.326281    8472 main.go:141] libmachine: [stderr =====>] : 
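
The "Writing magic tar header" / "Writing SSH key tar header" lines are the boot2docker disk trick: a small fixed VHD is created, a tar stream carrying the SSH key material is written into its data area, and the guest's init expands that tar onto the data partition on first boot; only then is the VHD converted to dynamic and resized to its final 20000MB. A rough sketch of the tar-writing step, assuming the data region of a fixed VHD starts at offset 0 (the exact docker-machine layout may differ):

package provision

import (
	"archive/tar"
	"os"
)

// writeKeyTar drops a tar stream at the start of the fixed VHD so the guest
// can pick up .ssh/authorized_keys on first boot. Path inside the archive
// and the offset are illustrative, not the verified on-disk format.
func writeKeyTar(vhdPath string, pubKey []byte) error {
	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f) // writes from offset 0, ahead of the VHD footer
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(pubKey))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return err
	}
	return tw.Close()
}
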
	I1212 23:15:23.326443    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-392000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000-m02 -DynamicMemoryEnabled $false
	I1212 23:15:29.047581    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:29.047983    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:29.048174    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000-m02 -Count 2
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:31.217251    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\boot2docker.iso'
	I1212 23:15:33.748082    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd'
	I1212 23:15:36.359294    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: Starting VM...
	I1212 23:15:36.359738    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000-m02
	I1212 23:15:39.227776    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:15:39.228071    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:41.509631    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:44.031565    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:44.031787    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:45.038541    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:49.774015    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:49.774142    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:50.775721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:55.502870    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:55.503039    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:56.518873    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:58.738659    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:58.738736    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:58.738844    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:02.269014    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:04.506810    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:04.506866    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:04.506903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:07.087487    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:07.087855    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:07.088033    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stderr =====>] : 
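
The "Waiting for host to start..." stretch above is a plain poll: query the VM state, then its first adapter's first IP, and sleep roughly a second between rounds until an address appears (empty stdout means "no IP yet"). A condensed sketch of that loop, assuming powershell.exe is on PATH and eliding the state check for brevity:

package provision

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls Hyper-V until the VM's first adapter reports an address,
// mirroring the Get-VM loop in the log above.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second) // empty stdout: adapter has no address yet
	}
	return "", fmt.Errorf("no IP for %s after %v", vm, timeout)
}
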
	I1212 23:16:09.244063    8472 machine.go:88] provisioning docker machine ...
	I1212 23:16:09.244248    8472 buildroot.go:166] provisioning hostname "multinode-392000-m02"
	I1212 23:16:09.244333    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:11.421631    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:13.977447    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:13.977572    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:13.983166    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:13.992249    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:13.992249    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname
	I1212 23:16:14.163299    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000-m02
	
	I1212 23:16:14.163350    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:16.307595    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:18.839723    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:18.840482    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:18.840482    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:18.989326    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
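
The shell block above is rendered host-side and deliberately idempotent: it does nothing if the hostname already resolves locally, rewrites the 127.0.1.1 line if one exists, and appends one otherwise. A sketch of rendering that command for an arbitrary hostname (the template mirrors the log; the helper name is made up):

package provision

import "fmt"

// hostsFixCmd renders the idempotent /etc/hosts update shown in the log.
func hostsFixCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}
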
	I1212 23:16:18.990311    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:16:18.990311    8472 buildroot.go:174] setting up certificates
	I1212 23:16:18.990311    8472 provision.go:83] configureAuth start
	I1212 23:16:18.990453    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:21.069665    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:23.556570    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:28.222549    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:28.222832    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:28.222832    8472 provision.go:138] copyHostCerts
	I1212 23:16:28.223026    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:16:28.223356    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:16:28.223356    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:16:28.223923    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:16:28.224665    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:16:28.225195    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:16:28.225367    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:16:28.225569    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:16:28.226891    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:16:28.227287    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:16:28.227287    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:16:28.227775    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:16:28.228810    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000-m02 san=[172.30.56.38 172.30.56.38 localhost 127.0.0.1 minikube multinode-392000-m02]
	I1212 23:16:28.608171    8472 provision.go:172] copyRemoteCerts
	I1212 23:16:28.622324    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:28.622324    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:30.750561    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:33.272878    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:33.273157    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:33.273672    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:33.380622    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7582767s)
	I1212 23:16:33.380733    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:16:33.380808    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:16:33.420401    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:16:33.420965    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:16:33.458601    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:16:33.458774    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:16:33.496244    8472 provision.go:86] duration metric: configureAuth took 14.5058679s
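
The configureAuth step above generates a server certificate whose SANs cover the new VM's IP, localhost, and the machine names from the san=[...] line, signed by the CA under .minikube\certs. A self-signed approximation with Go's crypto/x509 (the real flow signs with ca.pem/ca-key.pem rather than self-signing; values below are copied from the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-392000-m02"},
		IPAddresses:  []net.IP{net.ParseIP("172.30.56.38"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here for brevity: template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
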
	I1212 23:16:33.496324    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:33.496868    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:16:33.497008    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:38.152182    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:38.152702    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:38.152702    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:16:38.292294    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:16:38.292294    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:16:38.292555    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:16:38.292555    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:40.464946    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:43.007365    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:43.008294    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:43.008294    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.51.245"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:16:43.171083    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.51.245
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:16:43.171185    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:45.284624    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:47.800669    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:47.801716    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:47.801716    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:16:48.748338    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:16:48.748338    8472 machine.go:91] provisioned docker machine in 39.5040974s
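
The diff-then-move one-liner above is what keeps re-provisioning cheap: docker is restarted only when the rendered unit actually differs from what is on disk (here the file did not exist yet, hence the "can't stat" diff output followed by the symlink creation). A native-Go sketch of the same idea, under the assumption of a local runner rather than minikube's SSH one:

package provision

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces the unit file and restarts the service only when
// the rendered content differs, so unchanged hosts skip a needless restart.
func installIfChanged(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // identical unit on disk: leave docker running undisturbed
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s: %w", args, out, err)
		}
	}
	return nil
}
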
	I1212 23:16:48.748338    8472 client.go:171] LocalClient.Create took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:300] post-start starting for "multinode-392000-m02" (driver="hyperv")
	I1212 23:16:48.748887    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:48.762204    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:48.762204    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:50.863756    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:53.416608    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:53.526358    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7640815s)
	I1212 23:16:53.541029    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:53.550919    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:16:53.550919    8472 command_runner.go:130] > ID=buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:16:53.550919    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:16:53.551099    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:16:53.552635    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:16:53.552635    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:16:53.567223    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:53.582208    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:16:53.623271    8472 start.go:303] post-start completed in 4.8749111s
	I1212 23:16:53.626212    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:55.698604    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:58.239486    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:16:58.242308    8472 start.go:128] duration metric: createHost completed in 1m58.6051335s
	I1212 23:16:58.242308    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:00.321547    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:02.864207    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:02.864907    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:02.864907    8472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:03.006436    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423023.005320607
	
	I1212 23:17:03.006436    8472 fix.go:206] guest clock: 1702423023.005320607
	I1212 23:17:03.006436    8472 fix.go:219] Guest: 2023-12-12 23:17:03.005320607 +0000 UTC Remote: 2023-12-12 23:16:58.2423084 +0000 UTC m=+328.348317501 (delta=4.763012207s)
	I1212 23:17:03.006606    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:05.102311    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:07.631708    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:07.632284    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:07.632480    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702423023
	I1212 23:17:07.785418    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:17:03 UTC 2023
	
	I1212 23:17:07.785481    8472 fix.go:226] clock set: Tue Dec 12 23:17:03 UTC 2023
	 (err=<nil>)
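
The fix.go lines above read the guest clock via `date +%s.%N`, compute the host/guest delta (4.76s here), and force-set the guest with `sudo date -s @<unixtime>` when the drift is too large. A sketch of that check, assuming a hypothetical runSSH helper and using the host's clock as the timestamp source (the real flow's source may differ):

package provision

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// syncGuestClock reads the guest clock, measures drift against the host,
// and force-sets the guest when the drift exceeds maxDrift.
func syncGuestClock(runSSH func(cmd string) (string, error), maxDrift time.Duration) error {
	out, err := runSSH("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	drift := time.Since(time.Unix(0, int64(secs*float64(time.Second))))
	if drift < 0 {
		drift = -drift
	}
	if drift <= maxDrift {
		return nil // clocks agree closely enough; nothing to do
	}
	_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	return err
}
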
	I1212 23:17:07.785481    8472 start.go:83] releasing machines lock for "multinode-392000-m02", held for 2m8.1482636s
	I1212 23:17:07.785678    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:09.909750    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:12.452194    8472 out.go:177] * Found network options:
	I1212 23:17:12.452963    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.453612    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.454421    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.455285    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:17:12.456641    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.458904    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:12.459089    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:12.471636    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:17:12.471636    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:14.665006    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.330171    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.349676    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.349791    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.350393    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.520588    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:17:17.520698    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0616953s)
	I1212 23:17:17.520789    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1212 23:17:17.520789    8472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0491302s)
	W1212 23:17:17.520789    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:17.540506    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:17.565496    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:17:17.565496    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
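Unescaped, the find invocation above reads as follows; this is a restatement with the same semantics, not minikube source:

	# Rename every bridge/podman CNI config that is not already disabled, so only
	# minikube's chosen CNI configuration stays active in /etc/cni/net.d.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;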
	I1212 23:17:17.565629    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:17.565729    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:17.592642    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:17:17.606915    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:17:17.641476    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:17:17.660823    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:17:17.677875    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:17:17.711806    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.740097    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:17:17.771613    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.803488    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:17.833971    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
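Taken together, the sed edits above pin these values in /etc/containerd/config.toml; a quick way to verify on the guest (the keys come straight from the commands themselves):

	grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# expected after the edits:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"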
	I1212 23:17:17.864431    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:17.880090    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:17:17.891942    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:17.921922    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:18.092747    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 23:17:18.119496    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:18.134351    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Unit]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:17:18.152056    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:17:18.152056    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:17:18.152056    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Service]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Type=notify
	I1212 23:17:18.152056    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:17:18.152056    8472 command_runner.go:130] > Environment=NO_PROXY=172.30.51.245
	I1212 23:17:18.152056    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:17:18.152056    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:17:18.152056    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:17:18.152056    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:17:18.152056    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:17:18.152056    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:17:18.152056    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:17:18.152056    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:17:18.153073    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:17:18.153073    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:17:18.153073    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:17:18.153073    8472 command_runner.go:130] > Delegate=yes
	I1212 23:17:18.153073    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:17:18.153073    8472 command_runner.go:130] > KillMode=process
	I1212 23:17:18.153073    8472 command_runner.go:130] > [Install]
	I1212 23:17:18.153073    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:17:18.165057    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.196057    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:18.246410    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.280066    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.313237    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:17:18.368580    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.388251    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:18.419806    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:17:18.434054    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:17:18.440054    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:17:18.453333    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:17:18.468540    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:17:18.509927    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:17:18.683814    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:17:18.837593    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:17:18.838769    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:17:18.883547    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:19.063745    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:18:20.172717    8472 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1212 23:18:20.172717    8472 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1212 23:18:20.172717    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1086969s)
	I1212 23:18:20.190447    8472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 23:18:20.208531    8472 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	I1212 23:18:20.208875    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	I1212 23:18:20.208924    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	I1212 23:18:20.208955    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209960    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210035    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1212 23:18:20.210087    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210221    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210255    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210295    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210368    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I1212 23:18:20.218077    8472 out.go:177] 
	W1212 23:18:20.218999    8472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1212 23:18:20.219707    8472 out.go:239] * 
	W1212 23:18:20.220544    8472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:18:20.221540    8472 out.go:177] 
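For anyone triaging this failure from the journal above: the first dockerd (pid 674) launched its own managed containerd and came up cleanly, while the restarted dockerd (pid 1010) instead tried to dial /run/containerd/containerd.sock, the system containerd socket the harness had stopped moments earlier with "sudo systemctl stop -f containerd", and timed out after a minute. A plausible culprit is a stale socket file making dockerd prefer the (dead) system containerd over its managed one. Illustrative checks on the guest, not part of the test run:

	systemctl is-active containerd                # expect: inactive (stopped above)
	ls -l /run/containerd/containerd.sock         # stale socket left behind?
	journalctl -u docker --no-pager | tail -n 5   # the dial timeout quoted above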
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:37:26 UTC. --
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282437620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.284918206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.285109705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286113599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286332798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7694fc2e072409c82e9a89c81cdb1dbf3955a826194d4c6ce69896a818ffd8c/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eec0e2bb8f7fb3f97224e573a86f1d0c8af411baddfa1adaa20402928c80977d/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.073894364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074049263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074069063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074078763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132115055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132325154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132351354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132362153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.818830729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820198629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820221327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820295222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:57 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef8f16e239bc98e7eb9dc0c53fd98c42346ab8c95f8981cda5dde4865c3765b9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 12 23:18:58 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524301867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524431958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524458956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524471055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c0d1460fe14b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   ef8f16e239bc9       busybox-5bc68d56bd-x7ldl
	d33bb583a4c67       ead0a4a53df89                                                                                         22 minutes ago      Running             coredns                   0                   eec0e2bb8f7fb       coredns-5dd5756b68-4xn8h
	f6b34e581fc6d       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   d7694fc2e0724       storage-provisioner
	58046948f7a39       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              22 minutes ago      Running             kindnet-cni               0                   13c6e0fbb4c87       kindnet-bpcxd
	a260d7090f938       83f6cc407eed8                                                                                         22 minutes ago      Running             kube-proxy                0                   60c6b551ada48       kube-proxy-55nr8
	2313251d444bd       e3db313c6dbc0                                                                                         23 minutes ago      Running             kube-scheduler            0                   2f8be6d8ad0b8       kube-scheduler-multinode-392000
	22eab41fa9507       73deb9a3f7025                                                                                         23 minutes ago      Running             etcd                      0                   bb073669c83d7       etcd-multinode-392000
	235957741d342       d058aa5ab969c                                                                                         23 minutes ago      Running             kube-controller-manager   0                   0a157140134cc       kube-controller-manager-multinode-392000
	6c354edfe4229       7fe0e6f37db33                                                                                         23 minutes ago      Running             kube-apiserver            0                   74927bb72940a       kube-apiserver-multinode-392000
	
	* 
	* ==> coredns [d33bb583a4c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52436 - 14801 "HINFO IN 6583598644721938310.5334892932610769491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082658561s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412009s
	[INFO] 10.244.0.3:57910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.064058426s
	[INFO] 10.244.0.3:37802 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.037057868s
	[INFO] 10.244.0.3:53205 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.098326683s
	[INFO] 10.244.0.3:48065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120602s
	[INFO] 10.244.0.3:58616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050508538s
	[INFO] 10.244.0.3:60247 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114602s
	[INFO] 10.244.0.3:38852 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191504s
	[INFO] 10.244.0.3:34962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01262466s
	[INFO] 10.244.0.3:40837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094102s
	[INFO] 10.244.0.3:50511 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000205404s
	[INFO] 10.244.0.3:46775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218404s
	[INFO] 10.244.0.3:51546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092302s
	[INFO] 10.244.0.3:51278 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170504s
	[INFO] 10.244.0.3:40156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096702s
	[INFO] 10.244.0.3:57387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190803s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170703s
	[INFO] 10.244.0.3:48895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108502s
	[INFO] 10.244.0.3:34622 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141402s
	[INFO] 10.244.0.3:36375 - 5 "PTR IN 1.48.30.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000268705s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-392000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:37:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.51.245
	  Hostname:    multinode-392000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 430cf12d1f18486bbb2dad5ba35f34f7
	  System UUID:                7ad4f3ea-4ba4-0c41-b258-b71782793bdf
	  Boot ID:                    de054c31-4928-4877-9a0d-94e8f25eb559
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x7ldl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-5dd5756b68-4xn8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-multinode-392000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-bpcxd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-multinode-392000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-multinode-392000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-55nr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-multinode-392000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node multinode-392000 event: Registered Node multinode-392000 in Controller
	  Normal  NodeReady                22m                kubelet          Node multinode-392000 status is now: NodeReady
	
	
	Name:               multinode-392000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T23_34_53_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:34:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:37:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:34:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:34:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:34:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:35:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.48.192
	  Hostname:    multinode-392000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 d64f283fdbd04ec2abf7a123575a634e
	  System UUID:                93e58034-5f25-104c-8ce8-7830c4ca3032
	  Boot ID:                    c6343bf3-5b49-4ca9-a1db-9a4a9b9458e8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gl8th       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m34s
	  kube-system                 kube-proxy-rmg5p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m24s                  kube-proxy       
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m34s (x2 over 2m34s)  kubelet          Node multinode-392000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s (x2 over 2m34s)  kubelet          Node multinode-392000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s (x2 over 2m34s)  kubelet          Node multinode-392000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m33s                  node-controller  Node multinode-392000-m03 event: Registered Node multinode-392000-m03 in Controller
	  Normal  NodeReady                2m14s                  kubelet          Node multinode-392000-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +1.254662] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.084744] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.170112] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.825297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:13] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136611] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[ +29.496244] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.608816] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.164324] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.190534] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +1.324953] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.324912] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	[  +0.169479] systemd-fstab-generator[1166]: Ignoring "noauto" for root device
	[  +0.169520] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.165018] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.210508] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1309]: Ignoring "noauto" for root device
	[  +2.134792] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.270408] systemd-fstab-generator[1690]: Ignoring "noauto" for root device
	[  +0.838733] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.996306] systemd-fstab-generator[2661]: Ignoring "noauto" for root device
	[ +24.543609] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [22eab41fa950] <==
	* {"level":"info","ts":"2023-12-12T23:14:20.357792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgPreVoteResp from 93ff368cdeea47a1 at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgVoteResp from 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 93ff368cdeea47a1 elected leader 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.361772Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.36777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"93ff368cdeea47a1","local-member-attributes":"{Name:multinode-392000 ClientURLs:[https://172.30.51.245:2379]}","request-path":"/0/members/93ff368cdeea47a1/attributes","cluster-id":"577d8ccb6648d9a8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:20.367821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.367989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.370538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.372122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.51.245:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.409981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"577d8ccb6648d9a8","local-member-id":"93ff368cdeea47a1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410139Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:20.410799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:24:20.417791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2023-12-12T23:24:20.419362Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":681,"took":"1.040537ms","hash":778906542}
	{"level":"info","ts":"2023-12-12T23:24:20.419458Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":778906542,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:29:20.427361Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2023-12-12T23:29:20.428786Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":922,"took":"784.101µs","hash":2156113925}
	{"level":"info","ts":"2023-12-12T23:29:20.428884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2156113925,"revision":922,"compact-revision":681}
	{"level":"info","ts":"2023-12-12T23:34:20.436518Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1163}
	{"level":"info","ts":"2023-12-12T23:34:20.438268Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1163,"took":"858.507µs","hash":3676843287}
	{"level":"info","ts":"2023-12-12T23:34:20.438371Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3676843287,"revision":1163,"compact-revision":922}
	
	* 
	* ==> kernel <==
	*  23:37:27 up 25 min,  0 users,  load average: 0.13, 0.35, 0.39
	Linux multinode-392000 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [58046948f7a3] <==
	* I1212 23:36:22.398335       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:36:32.410587       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:36:32.410716       1 main.go:227] handling current node
	I1212 23:36:32.410755       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:36:32.410771       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:36:42.425638       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:36:42.425864       1 main.go:227] handling current node
	I1212 23:36:42.425897       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:36:42.425907       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:36:52.432677       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:36:52.432722       1 main.go:227] handling current node
	I1212 23:36:52.432736       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:36:52.432744       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:37:02.440862       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:37:02.440961       1 main.go:227] handling current node
	I1212 23:37:02.440977       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:37:02.440985       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:37:12.452303       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:37:12.452854       1 main.go:227] handling current node
	I1212 23:37:12.453005       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:37:12.453018       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:37:22.463267       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:37:22.463397       1 main.go:227] handling current node
	I1212 23:37:22.463414       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:37:22.463422       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [6c354edfe422] <==
	* I1212 23:14:22.966861       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:22.967846       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:14:22.980339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:23.000634       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:14:23.000942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:23.002240       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:23.002278       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:23.002287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:23.002295       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:23.011378       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:23.760921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:14:23.770137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:14:23.770155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:24.576880       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:24.669218       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:14:24.814943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:14:24.825391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.51.245]
	I1212 23:14:24.827160       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:14:24.832899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:14:24.873569       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:26.688119       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:26.703417       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:14:26.718299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:38.752415       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 23:14:39.103035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [235957741d34] <==
	* I1212 23:14:39.734721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.862413ms"
	I1212 23:14:39.785084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.307746ms"
	I1212 23:14:39.785221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.699µs"
	I1212 23:14:55.812545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.499µs"
	I1212 23:14:55.831423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.3µs"
	I1212 23:14:57.948826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.3µs"
	I1212 23:14:57.994852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.967283ms"
	I1212 23:14:57.995045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.9µs"
	I1212 23:14:58.351328       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 23:18:56.342092       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 23:18:56.360783       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-x7ldl"
	I1212 23:18:56.372461       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4rg9t"
	I1212 23:18:56.394927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.064871ms"
	I1212 23:18:56.421496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.459964ms"
	I1212 23:18:56.445750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.867827ms"
	I1212 23:18:56.446077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="103.493µs"
	I1212 23:18:59.452572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.321812ms"
	I1212 23:18:59.452821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.694µs"
	I1212 23:34:52.106307       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-392000-m03\" does not exist"
	I1212 23:34:52.120727       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-392000-m03" podCIDRs=["10.244.1.0/24"]
	I1212 23:34:52.134312       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rmg5p"
	I1212 23:34:52.139634       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gl8th"
	I1212 23:34:53.581868       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-392000-m03"
	I1212 23:34:53.582294       1 event.go:307] "Event occurred" object="multinode-392000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-392000-m03 event: Registered Node multinode-392000-m03 in Controller"
	I1212 23:35:12.788142       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-392000-m03"
	
	* 
	* ==> kube-proxy [a260d7090f93] <==
	* I1212 23:14:40.548388       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:40.568436       1 node.go:141] Successfully retrieved node IP: 172.30.51.245
	I1212 23:14:40.635432       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:40.635716       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:40.638923       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:40.639152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:40.639551       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:40.640017       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:40.641081       1 config.go:188] "Starting service config controller"
	I1212 23:14:40.641288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:40.641685       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:40.641937       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:40.644879       1 config.go:315] "Starting node config controller"
	I1212 23:14:40.645073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:40.742503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:14:40.742567       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:40.745261       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2313251d444b] <==
	* W1212 23:14:22.973548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:22.973806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.868650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:14:23.868677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:14:23.880821       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:14:23.880850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:23.906825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:14:23.907043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:14:23.908460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:23.909050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.954797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:23.954886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:23.961825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:14:23.961846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:14:24.085183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:14:24.085212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:14:24.103672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:14:24.103696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:14:24.119305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:24.119483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:24.143381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:14:24.143650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:14:24.300755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:14:24.300991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:14:25.823950       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:37:27 UTC. --
	Dec 12 23:31:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:32:27 multinode-392000 kubelet[2682]: E1212 23:32:27.001857    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:32:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:32:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:32:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:33:27 multinode-392000 kubelet[2682]: E1212 23:33:27.003252    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:34:27 multinode-392000 kubelet[2682]: E1212 23:34:27.005543    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:35:27 multinode-392000 kubelet[2682]: E1212 23:35:27.004961    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:36:27 multinode-392000 kubelet[2682]: E1212 23:36:27.005054    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:37:27 multinode-392000 kubelet[2682]: E1212 23:37:27.014710    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:37:19.081711   11260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
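The kubelet journal at the end of the log dump above repeats one benign error once a minute: the IPv6 iptables canary chain cannot be created because the guest kernel exposes no nat table to ip6tables. The commands below are a minimal sketch for confirming that reading from inside the VM, assuming shell access via minikube ssh with the profile name taken from this run; they are not part of the test harness.

	# open a shell on the control-plane node of this profile
	minikube ssh -p multinode-392000
	# inside the guest: ip6table_nat is expected to be absent from the loaded modules
	lsmod | grep ip6table
	# reproduce the kubelet failure directly (exit status 3, "Table does not exist")
	sudo ip6tables -t nat -L
	# if the guest kernel shipped the module, loading it would clear the canary error
	sudo modprobe ip6table_nat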
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000: (12.0977953s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-392000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-4rg9t
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t
helpers_test.go:282: (dbg) kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t:

                                                
                                                
-- stdout --
	Name:             busybox-5bc68d56bd-4rg9t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrqjf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hrqjf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m44s (x4 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (69.53s)
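The post-mortem above also explains the Pending pod: busybox-5bc68d56bd-4rg9t is kept off its sibling replica by pod anti-affinity, and the recorded FailedScheduling events show the scheduler seeing a single node that already runs busybox-5bc68d56bd-x7ldl ("0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules"). The commands below are a sketch for verifying that reading by hand, assuming kubectl access to the same context; the deployment name and context come from the log, and the commands are illustrative rather than part of the harness.

	# nodes the scheduler can currently place pods on
	kubectl --context multinode-392000 get nodes -o wide
	# the anti-affinity stanza on the busybox deployment
	kubectl --context multinode-392000 get deployment busybox -o jsonpath='{.spec.template.spec.affinity}'
	# pods stuck outside Running, the same query helpers_test.go issues above
	kubectl --context multinode-392000 get po -A --field-selector=status.phase!=Running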

                                                
                                    
TestMultiNode/serial/StopNode (99.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 node stop m03: (14.2443372s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-392000 status: exit status 7 (25.7126031s)

                                                
                                                
-- stdout --
	multinode-392000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-392000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-392000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:37:55.724051    3696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
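status exits non-zero here simply because multinode-392000-m03's host and kubelet report Stopped, the state the preceding node stop command put them in; the only stderr output is the same Docker CLI context warning that every minikube invocation in this run prints. With the hyperv driver the warning is cosmetic, since no Docker endpoint is consulted, but it can be silenced by repairing the CLI's current-context entry. A minimal sketch, assuming a working docker CLI on the Jenkins host:

	# show the contexts the CLI knows about; the broken "default" metadata may not be listed
	docker context ls
	# point the CLI back at the built-in default context, which rewrites
	# the currentContext entry in %USERPROFILE%\.docker\config.json
	docker context use default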
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-392000 status --alsologtostderr: exit status 7 (25.4678505s)

                                                
                                                
-- stdout --
	multinode-392000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-392000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-392000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:38:21.441938    7132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1212 23:38:21.523821    7132 out.go:296] Setting OutFile to fd 728 ...
	I1212 23:38:21.524974    7132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:38:21.524974    7132 out.go:309] Setting ErrFile to fd 780...
	I1212 23:38:21.524974    7132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:38:21.544197    7132 out.go:303] Setting JSON to false
	I1212 23:38:21.544197    7132 mustload.go:65] Loading cluster: multinode-392000
	I1212 23:38:21.544197    7132 notify.go:220] Checking for updates...
	I1212 23:38:21.545258    7132 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:38:21.545258    7132 status.go:255] checking status of multinode-392000 ...
	I1212 23:38:21.546077    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:38:23.717618    7132 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:38:23.717747    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:23.717747    7132 status.go:330] multinode-392000 host status = "Running" (err=<nil>)
	I1212 23:38:23.717747    7132 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:38:23.718679    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:38:25.862840    7132 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:38:25.862840    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:25.862840    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:38:28.375584    7132 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:38:28.375584    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:28.375584    7132 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:38:28.391036    7132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:38:28.391613    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:38:30.445272    7132 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:38:30.445272    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:30.445272    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:38:32.937308    7132 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:38:32.937356    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:32.937356    7132 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:38:33.039363    7132 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6477286s)
	I1212 23:38:33.054781    7132 ssh_runner.go:195] Run: systemctl --version
	I1212 23:38:33.076240    7132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:38:33.097389    7132 kubeconfig.go:92] found "multinode-392000" server: "https://172.30.51.245:8443"
	I1212 23:38:33.097389    7132 api_server.go:166] Checking apiserver status ...
	I1212 23:38:33.110174    7132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:38:33.142143    7132 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2099/cgroup
	I1212 23:38:33.158085    7132 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda728ade276b580d5a5541017805cb6e1/6c354edfe4229f128c63e6e81f9b8205c4c908288534b6c7e0dec3ef2529e203"
	I1212 23:38:33.171866    7132 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda728ade276b580d5a5541017805cb6e1/6c354edfe4229f128c63e6e81f9b8205c4c908288534b6c7e0dec3ef2529e203/freezer.state
	I1212 23:38:33.192219    7132 api_server.go:204] freezer state: "THAWED"
	I1212 23:38:33.192219    7132 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:38:33.200069    7132 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
	I1212 23:38:33.200069    7132 status.go:421] multinode-392000 apiserver status = Running (err=<nil>)
	I1212 23:38:33.200435    7132 status.go:257] multinode-392000 status: &{Name:multinode-392000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 23:38:33.200435    7132 status.go:255] checking status of multinode-392000-m02 ...
	I1212 23:38:33.200530    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:38:35.281262    7132 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:38:35.281262    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:35.281383    7132 status.go:330] multinode-392000-m02 host status = "Running" (err=<nil>)
	I1212 23:38:35.281383    7132 host.go:66] Checking if "multinode-392000-m02" exists ...
	I1212 23:38:35.281738    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:38:37.389840    7132 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:38:37.389840    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:37.389951    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:38:39.927004    7132 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:38:39.927112    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:39.927254    7132 host.go:66] Checking if "multinode-392000-m02" exists ...
	I1212 23:38:39.944381    7132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:38:39.944381    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:38:42.054981    7132 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:38:42.055235    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:42.055388    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:38:44.536220    7132 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:38:44.536302    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:44.536361    7132 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:38:44.638260    7132 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6937457s)
	I1212 23:38:44.652267    7132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:38:44.672613    7132 status.go:257] multinode-392000-m02 status: &{Name:multinode-392000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 23:38:44.672613    7132 status.go:255] checking status of multinode-392000-m03 ...
	I1212 23:38:44.673401    7132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m03 ).state
	I1212 23:38:46.734733    7132 main.go:141] libmachine: [stdout =====>] : Off
	
	I1212 23:38:46.734774    7132 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:38:46.734774    7132 status.go:330] multinode-392000-m03 host status = "Stopped" (err=<nil>)
	I1212 23:38:46.734774    7132 status.go:343] host is not running, skipping remaining checks
	I1212 23:38:46.734879    7132 status.go:257] multinode-392000-m03 status: &{Name:multinode-392000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
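The stderr trace above shows how "minikube status" probes each node on the hyperv driver: for every machine it shells out to PowerShell to read the VM state and the first IP address of the first network adapter, and only if the host is Running does it SSH in to check disk usage, the kubelet unit, and (for the control plane) the apiserver's /healthz endpoint. The following Go sketch reproduces the two Hyper-V queries; the helper names are hypothetical and it assumes a Windows host with the Hyper-V PowerShell module installed (the real implementation lives in minikube's libmachine hyperv driver).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runPS executes a PowerShell one-liner the same way the "[executing ==>]"
// lines in the trace do.
func runPS(command string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
	return strings.TrimSpace(string(out)), err
}

// vmState returns the Hyper-V state of a VM, e.g. "Running" or "Off".
// "Off" is what the trace reports for multinode-392000-m03, which status
// maps to Host:Stopped before skipping the SSH-level checks.
func vmState(name string) (string, error) {
	return runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
}

// vmIP reads the first IP address of the VM's first network adapter,
// matching the second query repeated throughout the trace.
func vmIP(name string) (string, error) {
	return runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name))
}

func main() {
	state, err := vmState("multinode-392000")
	if err != nil {
		fmt.Println("Get-VM failed:", err)
		return
	}
	ip, _ := vmIP("multinode-392000")
	fmt.Printf("state=%s ip=%s\n", state, ip)
}

Each Get-VM round trip in the trace takes roughly two seconds, which is why the three-node status check accounts for most of the command's 25-second runtime.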
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-392000 status --alsologtostderr": multinode-392000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode-392000-m02
type: Worker
host: Running
kubelet: Stopped

                                                
                                                
multinode-392000-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode_test.go:265: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-392000 status --alsologtostderr": multinode-392000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode-392000-m02
type: Worker
host: Running
kubelet: Stopped

                                                
                                                
multinode-392000-m03
type: Worker
host: Stopped
kubelet: Stopped
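Both assertions fail for the same underlying reason: the preceding step only stopped m03 ("node stop m03" in the audit log below), so the test presumably expects two running kubelets and one stopped, while the status output shows m02's kubelet down as well, leaving one running and two stopped. The check itself is a plain substring count over the status output; a minimal Go sketch of that logic, with the expected numbers inferred from the failure messages rather than taken from the test source:

package main

import (
	"fmt"
	"strings"
)

// kubeletCounts tallies the per-node kubelet lines in "minikube status"
// output, the same signal the test assertions key on.
func kubeletCounts(statusOut string) (running, stopped int) {
	running = strings.Count(statusOut, "kubelet: Running")
	stopped = strings.Count(statusOut, "kubelet: Stopped")
	return running, stopped
}

func main() {
	// Abbreviated form of the status output captured above.
	out := `multinode-392000
kubelet: Running

multinode-392000-m02
kubelet: Stopped

multinode-392000-m03
kubelet: Stopped
`
	running, stopped := kubeletCounts(out)
	// Stopping only m03 should yield running=2, stopped=1; the captured
	// output yields running=1, stopped=2, so both checks fail.
	fmt.Printf("running=%d stopped=%d\n", running, stopped)
}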

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000: (11.9571057s)
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25: (8.3280739s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-392000 -- rollout       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:18 UTC |                     |
	|         | status deployment/busybox            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --          |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --          |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --          |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --          |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t -- nslookup |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl -- nslookup |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t             |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | busybox-5bc68d56bd-x7ldl             |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-x7ldl -- sh       |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.48.1             |                  |                   |         |                     |                     |
	| node    | add -p multinode-392000 -v 3         | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:32 UTC | 12 Dec 23 23:35 UTC |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-392000 node stop m03       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:37 UTC |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:11:30
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.071716    8472 out.go:309] Setting ErrFile to fd 756...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.094706    8472 out.go:303] Setting JSON to false
	I1212 23:11:30.097728    8472 start.go:128] hostinfo: {"hostname":"minikube7","uptime":76287,"bootTime":1702346402,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:11:30.097728    8472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:11:30.099331    8472 out.go:177] * [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:11:30.099722    8472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:11:30.099722    8472 notify.go:220] Checking for updates...
	I1212 23:11:30.100958    8472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:11:30.101483    8472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:11:30.102516    8472 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:11:30.103354    8472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:11:30.104853    8472 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:11:35.379035    8472 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:11:35.380001    8472 start.go:298] selected driver: hyperv
	I1212 23:11:35.380001    8472 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:11:35.380001    8472 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:11:35.430879    8472 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:11:35.431976    8472 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:11:35.432174    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:11:35.432174    8472 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:11:35.432174    8472 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:11:35.432174    8472 start_flags.go:323] config:
	{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:35.432785    8472 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:11:35.434592    8472 out.go:177] * Starting control plane node multinode-392000 in cluster multinode-392000
	I1212 23:11:35.434882    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:11:35.435410    8472 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:11:35.435444    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:11:35.435894    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:11:35.435894    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:11:35.436458    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:11:35.436458    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json: {Name:mk07adc881ba1a1ec87edb34c2760e84e9f12eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:35.438010    8472 start.go:365] acquiring machines lock for multinode-392000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:11:35.438172    8472 start.go:369] acquired machines lock for "multinode-392000" in 43.3µs
	I1212 23:11:35.438240    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:11:35.438240    8472 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 23:11:35.439294    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:11:35.439734    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:11:35.439996    8472 client.go:168] LocalClient.Create starting
	I1212 23:11:35.440162    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441050    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441543    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:11:37.487993    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:11:39.204044    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:11:39.204143    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:39.204222    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:40.663233    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:44.190819    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:44.191081    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:44.194062    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:11:44.711737    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: Creating VM...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:47.732456    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:47.732576    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:47.732727    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:11:47.732880    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:49.467956    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:49.468070    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:49.468070    8472 main.go:141] libmachine: Creating VHD
	I1212 23:11:49.468208    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F469FE2D-E21B-45E1-BE12-1FCB18DB12B2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:11:53.108721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:56.276637    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -SizeBytes 20000MB
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:58.764692    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-392000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000 -DynamicMemoryEnabled $false
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:04.436332    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000 -Count 2
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\boot2docker.iso'
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd'
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:11.817904    8472 main.go:141] libmachine: Starting VM...
	I1212 23:12:11.817904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:12:14.636759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:16.857062    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:16.857260    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:16.857330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:20.386945    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:22.605951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:26.191747    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:28.348821    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:30.824944    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:30.825184    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:31.825449    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:36.445712    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:36.445785    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:37.459217    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:42.223526    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:44.305043    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:44.305406    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:44.305406    8472 machine.go:88] provisioning docker machine ...
	I1212 23:12:44.305506    8472 buildroot.go:166] provisioning hostname "multinode-392000"
	I1212 23:12:44.305650    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:46.463699    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:48.946017    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:48.946116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:48.952068    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:48.964084    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:48.964084    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname
	I1212 23:12:49.130659    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000
	
	I1212 23:12:49.130793    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:51.216440    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:53.725386    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:53.726016    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:53.726016    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:12:53.876910    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:12:53.876910    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:12:53.877039    8472 buildroot.go:174] setting up certificates
	I1212 23:12:53.877109    8472 provision.go:83] configureAuth start
	I1212 23:12:53.877163    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:55.991772    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:58.499603    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:00.594939    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:03.100178    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:03.100273    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:03.100273    8472 provision.go:138] copyHostCerts
	I1212 23:13:03.100538    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:13:03.100666    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:13:03.100666    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:13:03.101260    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:13:03.102786    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:13:03.103156    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:13:03.103156    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:13:03.103581    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:13:03.104593    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:13:03.105032    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:13:03.105032    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:13:03.105182    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:13:03.106302    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000 san=[172.30.51.245 172.30.51.245 localhost 127.0.0.1 minikube multinode-392000]
	I1212 23:13:03.360027    8472 provision.go:172] copyRemoteCerts
	I1212 23:13:03.374057    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:13:03.374057    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:08.008195    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:08.116237    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7420653s)
	I1212 23:13:08.116237    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:13:08.116427    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:13:08.152557    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:13:08.153040    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:13:08.195988    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:13:08.196559    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:13:08.232338    8472 provision.go:86] duration metric: configureAuth took 14.3551646s
	I1212 23:13:08.232338    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:13:08.233351    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:13:08.233351    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:10.326980    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:12.830327    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:12.831103    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:12.831103    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:13:12.971332    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:13:12.971397    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:13:12.971686    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:13:12.971759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:17.524781    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:17.524929    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:17.532264    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:17.532875    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:17.533036    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:13:17.693682    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:13:17.693682    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:19.797719    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:22.305428    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:22.305611    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:22.311364    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:22.312148    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:22.312148    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:13:23.268460    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:13:23.268460    8472 machine.go:91] provisioned docker machine in 38.9628792s
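
The swap above is deliberately idempotent: the rendered unit is written to docker.service.new, and only when `diff` reports a difference is it moved into place and the daemon reloaded and restarted. A minimal Go sketch of assembling that one-liner (the constant path is taken from the log; this is an illustration, not minikube's provisioner code):

package main

import "fmt"

func main() {
	const unit = "/lib/systemd/system/docker.service"
	// Swap in the rendered unit only when it differs from the installed one;
	// if diff exits 0 (files identical) the mv/reload/restart branch never runs.
	cmd := fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
		"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
		"sudo systemctl -f restart docker; }", unit)
	fmt.Println(cmd)
}

On a first boot, as here, the diff fails with "No such file or directory", so the new unit is always installed and enabled.
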
	I1212 23:13:23.268460    8472 client.go:171] LocalClient.Create took 1m47.8279792s
	I1212 23:13:23.268460    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m47.8282413s
	I1212 23:13:23.268460    8472 start.go:300] post-start starting for "multinode-392000" (driver="hyperv")
	I1212 23:13:23.268460    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:13:23.283134    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:13:23.283134    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:25.344143    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:25.344398    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:25.344531    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:27.853202    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:27.960465    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6773102s)
	I1212 23:13:27.975019    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:13:27.981168    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:13:27.981317    8472 command_runner.go:130] > ID=buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:13:27.981317    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:13:27.981408    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:13:27.981509    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:13:27.981573    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:13:27.982899    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:13:27.982899    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:13:27.996731    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:13:28.011281    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:13:28.049499    8472 start.go:303] post-start completed in 4.7810169s
	I1212 23:13:28.051903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:30.124520    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:32.635986    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:32.636168    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:32.636335    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:13:32.639612    8472 start.go:128] duration metric: createHost completed in 1m57.2008454s
	I1212 23:13:32.639734    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:37.252006    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:37.252675    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:37.252675    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:13:37.394466    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422817.389981544
	
	I1212 23:13:37.394466    8472 fix.go:206] guest clock: 1702422817.389981544
	I1212 23:13:37.394466    8472 fix.go:219] Guest: 2023-12-12 23:13:37.389981544 +0000 UTC Remote: 2023-12-12 23:13:32.6396781 +0000 UTC m=+122.746612401 (delta=4.750303444s)
	I1212 23:13:37.394466    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:39.525951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:42.048856    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:42.049171    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:42.054999    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:42.057020    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:42.057020    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702422817
	I1212 23:13:42.207558    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:13:37 UTC 2023
	
	I1212 23:13:42.207558    8472 fix.go:226] clock set: Tue Dec 12 23:13:37 UTC 2023
	 (err=<nil>)
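
The clock fix above reads the guest clock with `date +%s.%N`, compares it against the host-side timestamp, and resets the guest via `sudo date -s @<epoch>` when the drift is significant. A self-contained sketch of that comparison, using the values from this run (4.750303444s of drift); the parsing is illustrative, not minikube's fix.go:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest clock as printed by `date +%s.%N` (value taken from the log above).
	out := "1702422817.389981544"
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side reference, reconstructed here from the logged delta.
	remote := guest.Add(-4750303444 * time.Nanosecond)
	fmt.Printf("guest clock: %v (delta=%v)\n", guest.UTC(), guest.Sub(remote))

	// When the drift exceeds the tolerance, the guest clock is reset over SSH:
	fmt.Printf("sudo date -s @%d\n", guest.Unix())
}
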
	I1212 23:13:42.207558    8472 start.go:83] releasing machines lock for "multinode-392000", held for 2m6.7687735s
	I1212 23:13:42.208388    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:46.748039    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:46.748116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:46.752230    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:13:46.752339    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:46.765270    8472 ssh_runner.go:195] Run: cat /version.json
	I1212 23:13:46.765814    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:51.518393    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.518589    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.519047    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.538571    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.618146    8472 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 23:13:51.618146    8472 ssh_runner.go:235] Completed: cat /version.json: (4.8528548s)
	I1212 23:13:51.632470    8472 ssh_runner.go:195] Run: systemctl --version
	I1212 23:13:51.705182    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:13:51.705326    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9530322s)
	I1212 23:13:51.705474    8472 command_runner.go:130] > systemd 247 (247)
	I1212 23:13:51.705474    8472 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:13:51.717133    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:13:51.725591    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:13:51.726008    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:13:51.738060    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:13:51.760525    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:13:51.761431    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
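
Bridge- and podman-style CNI configs are parked by renaming them with a .mk_disabled suffix so that only the CNI minikube selects (kindnet here, see below) stays active. A sketch of that find invocation as it might be written for an interactive shell (minikube passes the arguments directly, without shell quoting):

package main

import "fmt"

func main() {
	// Same predicate as the logged find run: print each hit, then rename it aside.
	cmd := `sudo find /etc/cni/net.d -maxdepth 1 -type f ` +
		`\( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) ` +
		`-printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;`
	fmt.Println(cmd)
}
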
	I1212 23:13:51.761431    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:51.761737    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:51.787290    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:13:51.802604    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:13:51.833298    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:13:51.849124    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:13:51.865424    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:13:51.896430    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.925062    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:13:51.954292    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.986199    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:13:52.018341    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:13:52.051014    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:13:52.066722    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:13:52.079021    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:13:52.108672    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:52.285653    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 23:13:52.311279    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:52.326723    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Unit]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:13:52.345659    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:13:52.345659    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:13:52.345659    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Service]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Type=notify
	I1212 23:13:52.345659    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:13:52.345659    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:13:52.346602    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:13:52.346602    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:13:52.346602    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:13:52.346602    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:13:52.346602    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:13:52.346602    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:13:52.346602    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:13:52.346602    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:13:52.346602    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:13:52.346602    8472 command_runner.go:130] > Delegate=yes
	I1212 23:13:52.346602    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:13:52.346602    8472 command_runner.go:130] > KillMode=process
	I1212 23:13:52.346602    8472 command_runner.go:130] > [Install]
	I1212 23:13:52.346602    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:13:52.361605    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.398612    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:13:52.438497    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.478249    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.515469    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:13:52.572526    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.596922    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:52.625715    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:13:52.640295    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:13:52.648317    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:13:52.660918    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:13:52.675527    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:13:52.716542    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:13:52.882321    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:13:53.028395    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:13:53.028810    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:13:53.070347    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:53.231794    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:13:54.707655    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4758548s)
	I1212 23:13:54.722714    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:54.886957    8472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 23:13:55.059072    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:55.219495    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.397909    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 23:13:55.436243    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.597738    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 23:13:55.697504    8472 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 23:13:55.711625    8472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:13:55.718995    8472 command_runner.go:130] > Device: 16h/22d	Inode: 928         Links: 1
	I1212 23:13:55.718995    8472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 23:13:55.719086    8472 command_runner.go:130] > Access: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Modify: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Change: 2023-12-12 23:13:55.617702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] >  Birth: -
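
The 60-second wait for /var/run/cri-dockerd.sock reduces to polling until a stat of the path succeeds or the deadline passes. A local sketch of that loop; os.Stat stands in for the stat-over-SSH the log shows, and the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s after %v", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
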
	I1212 23:13:55.719245    8472 start.go:543] Will wait 60s for crictl version
	I1212 23:13:55.732224    8472 ssh_runner.go:195] Run: which crictl
	I1212 23:13:55.737239    8472 command_runner.go:130] > /usr/bin/crictl
	I1212 23:13:55.751402    8472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:13:55.821560    8472 command_runner.go:130] > Version:  0.1.0
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeName:  docker
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:13:55.821684    8472 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 23:13:55.831458    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.865302    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.877867    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.906635    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.909704    8472 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 23:13:55.909704    8472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: 172.30.48.1/20
	I1212 23:13:55.931095    8472 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 23:13:55.936984    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
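
The host.minikube.internal entry is kept idempotent by stripping any existing tab-separated line for the name and appending the current mapping in a single shell pipeline. A sketch of how such a command string can be assembled (addHostsEntry is an illustrative helper, not minikube's):

package main

import "fmt"

// addHostsEntry builds the grep-and-append one-liner used to keep a single
// authoritative line for a host name in /etc/hosts.
func addHostsEntry(ip, name string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%[2]s$' \"/etc/hosts\"; echo \"%[1]s\t%[2]s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		ip, name)
}

func main() {
	fmt.Println(addHostsEntry("172.30.48.1", "host.minikube.internal"))
}
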
	I1212 23:13:55.954782    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:13:55.966850    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:13:55.989987    8472 docker.go:671] Got preloaded images: 
	I1212 23:13:55.989987    8472 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 23:13:56.002978    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:13:56.016572    8472 command_runner.go:139] > {"Repositories":{}}
	I1212 23:13:56.029505    8472 ssh_runner.go:195] Run: which lz4
	I1212 23:13:56.035359    8472 command_runner.go:130] > /usr/bin/lz4
	I1212 23:13:56.035359    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:13:56.046382    8472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:13:56.052856    8472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 23:13:58.736125    8472 docker.go:635] Took 2.700536 seconds to copy over tarball
	I1212 23:13:58.753146    8472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:14:08.022919    8472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2697318s)
	I1212 23:14:08.022919    8472 ssh_runner.go:146] rm: /preloaded.tar.lz4
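
The preload sequence above is: probe with stat (a non-zero exit means the tarball is absent), copy the ~423 MB archive over SSH, unpack it with lz4-aware tar into /var, then delete it. A sketch of the same steps as shell strings; the scp source is a placeholder for the local preload cache path:

package main

import "fmt"

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Mirror of the logged steps: probe, copy on miss, extract, clean up.
	steps := []string{
		fmt.Sprintf("stat -c \"%%s %%y\" %s", tarball), // non-zero exit => tarball absent
		"scp <preload cache> " + tarball,               // hypothetical transfer step
		"sudo tar -I lz4 -C /var -xf " + tarball,
		"sudo rm -f " + tarball,
	}
	for _, s := range steps {
		fmt.Println(s)
	}
}
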
	I1212 23:14:08.095190    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:14:08.111721    8472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 23:14:08.111721    8472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 23:14:08.157625    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:14:08.340167    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:14:10.676687    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3364436s)
	I1212 23:14:10.688217    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:14:10.713622    8472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 23:14:10.713688    8472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:10.713884    8472 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 23:14:10.713884    8472 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:14:10.725093    8472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 23:14:10.761269    8472 command_runner.go:130] > cgroupfs
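
The driver reported by `docker info --format {{.CgroupDriver}}` above is what later lands in the KubeletConfiguration as cgroupDriver, so kubelet and the runtime agree. A small local sketch of that probe (docker must be on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" in this run
}
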
	I1212 23:14:10.761441    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:10.761635    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:10.761699    8472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:14:10.761699    8472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.51.245 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-392000 NodeName:multinode-392000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.51.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.51.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:14:10.761920    8472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.51.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-392000"
	  kubeletExtraArgs:
	    node-ip: 172.30.51.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.51.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:14:10.762050    8472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
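
The kubelet drop-in printed above is rendered from the cluster config: version, CRI socket, hostname override and node IP are all substituted in. A minimal sketch of rendering the same unit with text/template; the template and field names are illustrative, not minikube's actual template struct:

package main

import (
	"os"
	"text/template"
)

const kubeletTmpl = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version":   "v1.28.4",
		"CRISocket": "unix:///var/run/cri-dockerd.sock",
		"Node":      "multinode-392000",
		"IP":        "172.30.51.245",
	})
}
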
	I1212 23:14:10.779262    8472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:14:10.794245    8472 command_runner.go:130] > kubeadm
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubectl
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubelet
	I1212 23:14:10.794911    8472 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:14:10.809051    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:14:10.823032    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:14:10.848411    8472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:14:10.870951    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 23:14:10.911088    8472 ssh_runner.go:195] Run: grep 172.30.51.245	control-plane.minikube.internal$ /etc/hosts
	I1212 23:14:10.917196    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.51.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:14:10.933858    8472 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000 for IP: 172.30.51.245
	I1212 23:14:10.933934    8472 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:10.934858    8472 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 23:14:10.935530    8472 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 23:14:10.936524    8472 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key
	I1212 23:14:10.936810    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt with IP's: []
	I1212 23:14:11.093297    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt ...
	I1212 23:14:11.093297    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt: {Name:mk11a4d3835ab9ea840eb8ac6add84affb6c8dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.094980    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key ...
	I1212 23:14:11.094980    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key: {Name:mk06fddcf6422638da0b31b4d428923c70703238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.095936    8472 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa
	I1212 23:14:11.096955    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa with IP's: [172.30.51.245 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:14:11.196952    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa ...
	I1212 23:14:11.197202    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa: {Name:mkdf435dcc8983bec1e572c7a448162db34b2756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.198846    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa ...
	I1212 23:14:11.198846    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa: {Name:mk41672c6a02cbb3382bef7d288d52f8f77ae5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.199921    8472 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt
	I1212 23:14:11.213239    8472 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key
	I1212 23:14:11.214508    8472 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key
	I1212 23:14:11.214661    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt with IP's: []
	I1212 23:14:11.328325    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt ...
	I1212 23:14:11.328325    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt: {Name:mk6e1ad80e6dad066789266c677d39834bd11583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.330616    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key ...
	I1212 23:14:11.330616    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key: {Name:mk3959079764fecf7ecbee13715f18146dcf3506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
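
The cert generation above issues a client cert signed by the shared minikubeCA, plus an apiserver cert whose SANs cover the node IP, the service VIP, localhost and the cluster address. A self-contained sketch of issuing an IP-SAN certificate from a CA with Go's crypto/x509; key size, subjects and lifetimes here are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("172.30.51.245"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued %d-byte DER cert with %d IP SANs\n", len(der), len(srvTmpl.IPAddresses))
}
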
	I1212 23:14:11.332006    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:14:11.332144    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:14:11.332442    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:14:11.342046    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:14:11.342358    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:14:11.342600    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:14:11.342813    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:14:11.343009    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:14:11.343165    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem (1338 bytes)
	W1212 23:14:11.343825    8472 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816_empty.pem, impossibly tiny 0 bytes
	I1212 23:14:11.343825    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 23:14:11.344117    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 23:14:11.344381    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 23:14:11.344630    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 23:14:11.344862    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem (1708 bytes)
	I1212 23:14:11.344862    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem -> /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.345574    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.345718    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:11.345852    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:14:11.386214    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:14:11.425674    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:14:11.464191    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:14:11.502474    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:14:11.538128    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:14:11.575129    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:14:11.613906    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:14:11.650659    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem --> /usr/share/ca-certificates/13816.pem (1338 bytes)
	I1212 23:14:11.686706    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /usr/share/ca-certificates/138162.pem (1708 bytes)
	I1212 23:14:11.726349    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:14:11.762200    8472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:14:11.800421    8472 ssh_runner.go:195] Run: openssl version
	I1212 23:14:11.809841    8472 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:14:11.823469    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13816.pem && ln -fs /usr/share/ca-certificates/13816.pem /etc/ssl/certs/13816.pem"
	I1212 23:14:11.861330    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.882273    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.889871    8472 command_runner.go:130] > 51391683
	I1212 23:14:11.903385    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13816.pem /etc/ssl/certs/51391683.0"
	I1212 23:14:11.935310    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138162.pem && ln -fs /usr/share/ca-certificates/138162.pem /etc/ssl/certs/138162.pem"
	I1212 23:14:11.964261    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970426    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970992    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.982253    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.990140    8472 command_runner.go:130] > 3ec20f2e
	I1212 23:14:12.009886    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138162.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:14:12.038995    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:14:12.069702    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.089604    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.096884    8472 command_runner.go:130] > b5213941
	I1212 23:14:12.110390    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
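
Each extra CA above is linked as /etc/ssl/certs/<subject-hash>.0 (b5213941.0 for minikubeCA in this run) so OpenSSL can find it by directory lookup; the hash is taken from `openssl x509 -hash -noout`. A sketch that shells out for the hash and prints the link command, assuming openssl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	fmt.Printf("sudo ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
}
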
	I1212 23:14:12.140395    8472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:14:12.146418    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 kubeadm.go:404] StartCluster: {Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:14:12.155995    8472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 23:14:12.194954    8472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:14:12.223698    8472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:14:12.252003    8472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266543    8472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266717    8472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
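
The line above is the heart of StartCluster: minikube prepends its pinned binary directory to PATH and runs `kubeadm init` against the generated config, explicitly skipping preflight checks it has already satisfied itself (or deliberately tolerates, such as Swap). A sketch of driving the same invocation from Go with os/exec; the preflight list is abbreviated here and the paths are the ones in the log:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "env",
            "PATH=/var/lib/minikube/binaries/v1.28.4:"+os.Getenv("PATH"),
            "kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem") // abbreviated list
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // stream kubeadm's phase output
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
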
	I1212 23:14:12.516893    8472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.516947    8472 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.517226    8472 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:14:12.517226    8472 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:14:13.027121    8472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027121    8472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027384    8472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027384    8472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027545    8472 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 23:14:13.027656    8472 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 23:14:13.446026    8472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447343    8472 out.go:204]   - Generating certificates and keys ...
	I1212 23:14:13.446026    8472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447732    8472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:14:13.447800    8472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:14:13.448160    8472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.448217    8472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.576197    8472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.576331    8472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.756341    8472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.756398    8472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.844910    8472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:13.844957    8472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:14.189004    8472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.189084    8472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.353924    8472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.353924    8472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.354351    8472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.354351    8472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.509618    8472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.509618    8472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.510200    8472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.510200    8472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.634812    8472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.634883    8472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.965686    8472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:14.965747    8472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:15.155790    8472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:14:15.155863    8472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:14:15.156194    8472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.156194    8472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.627970    8472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:15.628062    8472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:16.106269    8472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.106461    8472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.241202    8472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.241256    8472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.532306    8472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.532306    8472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.533302    8472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.533432    8472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.538562    8472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.538657    8472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.539723    8472 out.go:204]   - Booting up control plane ...
	I1212 23:14:16.539967    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.540045    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.541855    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.541855    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.543221    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.543286    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.570893    8472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.570998    8472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.572167    8472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572329    8472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572476    8472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:14:16.572590    8472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:14:16.741649    8472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:16.741649    8472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:25.247209    8472 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247209    8472 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247636    8472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.247636    8472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.274937    8472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.274937    8472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.809600    8472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.809600    8472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.810164    8472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:25.810216    8472 kubeadm.go:322] [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:26.326643    8472 kubeadm.go:322] [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.327542    8472 out.go:204]   - Configuring RBAC rules ...
	I1212 23:14:26.326643    8472 command_runner.go:130] > [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.328018    8472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.328018    8472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.341522    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.341728    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.354025    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.354025    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.359843    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.359843    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.364553    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.364553    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.369249    8472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.369249    8472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.393459    8472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.393481    8472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.711238    8472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.711357    8472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.750599    8472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.750686    8472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.751909    8472 kubeadm.go:322] 
	I1212 23:14:26.752244    8472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752244    8472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752424    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.753252    8472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753252    8472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753309    8472 kubeadm.go:322] 
	I1212 23:14:26.753415    8472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322] 
	I1212 23:14:26.753445    8472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.753445    8472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.754014    8472 kubeadm.go:322] 
	I1212 23:14:26.754183    8472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754220    8472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754289    8472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 kubeadm.go:322] 
	I1212 23:14:26.754289    8472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754289    8472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754820    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754820    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754878    8472 kubeadm.go:322] 	--control-plane 
	I1212 23:14:26.754917    8472 command_runner.go:130] > 	--control-plane 
	I1212 23:14:26.754917    8472 kubeadm.go:322] 
	I1212 23:14:26.754995    8472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] 
	I1212 23:14:26.755165    8472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755165    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755707    8472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
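
The --discovery-token-ca-cert-hash in the join commands pins the cluster CA for joining nodes: kubeadm defines it as the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. A self-contained sketch that recomputes the value, assuming the CA file sits in the certificate directory used earlier in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command
    }
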
	I1212 23:14:26.755762    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:26.755762    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:26.756717    8472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:14:26.771363    8472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 23:14:26.781345    8472 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: 2023-12-12 23:12:39.138849800 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Change: 2023-12-12 23:12:30.064000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] >  Birth: -
	I1212 23:14:26.781345    8472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:14:26.781345    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:14:26.831214    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:14:28.360489    8472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5292685s)
	I1212 23:14:28.360489    8472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:14:28.377434    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.378438    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-392000 minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.385676    8472 command_runner.go:130] > -16
	I1212 23:14:28.385745    8472 ops.go:34] apiserver oom_adj: -16
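
The oom_adj check above confirms the API server runs with a strongly negative OOM score (-16), so the kernel avoids killing it under memory pressure. A rough Go equivalent of the bash one-liner, assuming a single kube-apiserver process:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep prints one PID per line; this sketch assumes exactly one match.
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        raw, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw)))
    }
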
	I1212 23:14:28.554211    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:14:28.554334    8472 command_runner.go:130] > node/multinode-392000 labeled
	I1212 23:14:28.574988    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.698031    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:28.717179    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.830537    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.348608    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.461037    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.849506    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.957356    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.362625    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.472272    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.848396    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.953849    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.353576    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.462341    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.853090    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.967586    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.355892    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.469924    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.859728    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.962773    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.364239    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.470177    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.864784    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.968916    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.351439    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.459257    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.855142    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.992369    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.364118    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.480745    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.848471    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.981045    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.353504    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:36.474547    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.857811    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.009603    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.360939    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.541831    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.855360    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.978223    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.358089    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:38.550481    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.868761    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.022604    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:39.352440    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.596621    8472 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:14:39.596712    8472 command_runner.go:130] > default   0         0s
	I1212 23:14:39.596736    8472 kubeadm.go:1088] duration metric: took 11.2361966s to wait for elevateKubeSystemPrivileges.
	I1212 23:14:39.596811    8472 kubeadm.go:406] StartCluster complete in 27.450269s
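
The burst of "serviceaccounts \"default\" not found" lines above is a poll loop: minikube re-runs `kubectl get sa default` until the token controller in the new cluster creates the default ServiceAccount (about 11 seconds in this run). A generic Go sketch of the same wait pattern, with an illustrative timeout:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // Exit status 0 means the ServiceAccount exists.
            err := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }
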
	I1212 23:14:39.596862    8472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.597021    8472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.598694    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.600390    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:14:39.600697    8472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:14:39.600890    8472 addons.go:69] Setting storage-provisioner=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:69] Setting default-storageclass=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:231] Setting addon storage-provisioner=true in "multinode-392000"
	I1212 23:14:39.601014    8472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-392000"
	I1212 23:14:39.601153    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:39.601286    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:39.602024    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.602448    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
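
Each `libmachine: [executing ==>]` line above is one non-interactive PowerShell call asking Hyper-V for the VM's state; the driver shells out per query rather than holding a session open. A Windows-only Go sketch of that query, using the VM name from this test:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmState runs one PowerShell invocation, mirroring the log lines above.
    func vmState(name string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive",
            fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, name),
        ).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := vmState("multinode-392000")
        if err != nil {
            panic(err)
        }
        fmt.Println(state) // e.g. "Running"
    }
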
	I1212 23:14:39.615520    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.616537    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.618133    8472 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:14:39.618679    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.618746    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.618746    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.618746    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.632969    8472 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1212 23:14:39.632969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Audit-Id: 48d468c3-d2b5-4ebf-8a31-5cfcaaf2e038
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.633400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.633475    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.633615    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
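
The round_trippers lines are client-go's HTTP trace: the verb and URL going out, then the status, headers, and (for small payloads) body coming back. The same tracing can be sketched with a custom http.RoundTripper; the endpoint below is a placeholder, and the real client additionally carries the TLS client certificates shown in the config dump above:

    package main

    import (
        "fmt"
        "net/http"
    )

    // loggingTransport prints verb, URL, and response status around each
    // request, mimicking the round_trippers trace format seen in this log.
    type loggingTransport struct{ next http.RoundTripper }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        fmt.Printf("%s %s\n", req.Method, req.URL)
        resp, err := t.next.RoundTrip(req)
        if err == nil {
            fmt.Printf("Response Status: %s\n", resp.Status)
        }
        return resp, err
    }

    func main() {
        client := &http.Client{Transport: loggingTransport{http.DefaultTransport}}
        resp, err := client.Get("https://example.com/") // placeholder endpoint
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
    }
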
	I1212 23:14:39.634237    8472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634414    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.634442    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:39.634488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.647166    8472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:14:39.647166    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Audit-Id: 1d18df1e-467b-45b4-8fd3-f1be9c0eb077
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.647166    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.647166    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.647166    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.647166    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.650190    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:39.650593    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.650593    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Audit-Id: 257b2ee0-65f9-4fbe-a3e6-2b26b38e4e97
	I1212 23:14:39.650746    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.650746    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.650879    8472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-392000" context rescaled to 1 replica
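
The rescale above is a read-modify-write on the Deployment's scale subresource: GET the autoscaling/v1 Scale object, set spec.replicas to 1, PUT it back (the resourceVersion moving 382 -> 384 confirms the write). A bare-bones sketch of the PUT; authentication and CA verification are omitted to keep it short, whereas the real request authenticates with the client certificate from the kubeconfig:

    package main

    import (
        "bytes"
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify only keeps the sketch short; never do this in real code.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        body := []byte(`{"kind":"Scale","apiVersion":"autoscaling/v1",` +
            `"metadata":{"name":"coredns","namespace":"kube-system"},` +
            `"spec":{"replicas":1}}`)
        req, err := http.NewRequest(http.MethodPut,
            "https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale",
            bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "application/json")
        resp, err := client.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // expect 200 OK with valid credentials
    }
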
	I1212 23:14:39.650983    8472 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:14:39.652101    8472 out.go:177] * Verifying Kubernetes components...
	I1212 23:14:39.667782    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:39.958848    8472 command_runner.go:130] > apiVersion: v1
	I1212 23:14:39.958848    8472 command_runner.go:130] > data:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   Corefile: |
	I1212 23:14:39.958848    8472 command_runner.go:130] >     .:53 {
	I1212 23:14:39.958848    8472 command_runner.go:130] >         errors
	I1212 23:14:39.958848    8472 command_runner.go:130] >         health {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            lameduck 5s
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         ready
	I1212 23:14:39.958848    8472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            pods insecure
	I1212 23:14:39.958848    8472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:14:39.958848    8472 command_runner.go:130] >            ttl 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         prometheus :9153
	I1212 23:14:39.958848    8472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            max_concurrent 1000
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         cache 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loop
	I1212 23:14:39.958848    8472 command_runner.go:130] >         reload
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loadbalance
	I1212 23:14:39.958848    8472 command_runner.go:130] >     }
	I1212 23:14:39.958848    8472 command_runner.go:130] > kind: ConfigMap
	I1212 23:14:39.958848    8472 command_runner.go:130] > metadata:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:14:26Z"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   name: coredns
	I1212 23:14:39.958848    8472 command_runner.go:130] >   namespace: kube-system
	I1212 23:14:39.958848    8472 command_runner.go:130] >   resourceVersion: "257"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   uid: 7f397c04-a5c3-4364-9f10-d28458f5d6c8
	I1212 23:14:39.959540    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
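
The sed pipeline above edits the Corefile fetched a moment earlier: it inserts a hosts stanza (mapping host.minikube.internal to the host-side gateway, 172.30.48.1 here) before the forward plugin, adds a log directive before errors, and pipes the result into `kubectl replace`. The same text transformation, sketched in Go on an abbreviated Corefile:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := ".:53 {\n" +
            "        errors\n" +
            "        forward . /etc/resolv.conf {\n" +
            "           max_concurrent 1000\n" +
            "        }\n" +
            "    }\n" // abbreviated from the ConfigMap dump above

        hostsBlock := "        hosts {\n" +
            "           172.30.48.1 host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"

        // Insert the hosts stanza before "forward ." and "log" before "errors",
        // mirroring the two sed -i expressions in the command above.
        out := strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
        out = strings.Replace(out, "        errors", "        log\n        errors", 1)
        fmt.Print(out)
    }
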
	I1212 23:14:39.961001    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.962156    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.963642    8472 node_ready.go:35] waiting up to 6m0s for node "multinode-392000" to be "Ready" ...
	I1212 23:14:39.963798    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.963914    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.963987    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.963987    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.969659    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:39.969659    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Audit-Id: ed4f4991-8208-4d64-8919-42fbdb031b1b
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.970862    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:39.972406    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.972406    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.972643    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.972643    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.974394    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:39.975312    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Audit-Id: 8a9ed035-646e-4f38-b110-fe61c0dc496f
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.975401    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.975946    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.488957    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.488957    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.488957    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.488957    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.492969    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:40.492969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Audit-Id: d903c580-8adc-4d96-8f5f-d51f731bc93c
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.492969    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.668167    8472 command_runner.go:130] > configmap/coredns replaced
	I1212 23:14:40.669157    8472 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
	I1212 23:14:40.981876    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.981950    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.982011    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.982011    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.991394    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:40.991394    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Audit-Id: ab5b6285-e3ff-4e6f-b61b-a20df0759ba6
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.991394    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.489914    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.490030    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.490030    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.490030    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.494868    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:41.495917    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Audit-Id: 1e563910-36f9-4968-810e-a0bd4b1bd52f
	I1212 23:14:41.496167    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.496302    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.496696    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.903563    8472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:41.903563    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:41.904285    8472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:41.904285    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:14:41.904285    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.905110    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:41.906532    8472 addons.go:231] Setting addon default-storageclass=true in "multinode-392000"
	I1212 23:14:41.906532    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:41.907304    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.980106    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.980486    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.980486    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.980486    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.985786    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:41.985786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.985786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Audit-Id: 08bb64de-dde1-4fa6-8913-0f6b5de0cf24
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.986033    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.986033    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.986463    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.987219    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:42.486548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.486653    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.486653    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.486653    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.496333    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:42.496447    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.496447    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.496524    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.496524    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Audit-Id: 4ab1601a-d766-4e5d-a976-df70bc7f3fc6
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.496654    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.497705    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:42.979753    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.979865    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.979865    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.979865    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.984301    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:42.984301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Audit-Id: d84e4388-d133-418c-ad44-eb666ea80368
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.984627    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.984771    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.985134    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.487286    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.487436    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.487436    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.487436    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.493059    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.493240    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Audit-Id: ff7197c8-30b8-4b58-8cc1-df9d319b0dbf
	I1212 23:14:43.493700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.979059    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.979132    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.979132    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.979132    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.984231    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.984231    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Audit-Id: a3b2e6ef-d4d8-4f3e-b9c5-6d5c3c21bbd3
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.984345    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.984345    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.984416    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.984416    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.984602    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.095027    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.095183    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.095249    8472 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:44.095249    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:14:44.095249    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.120131    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:44.483249    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.483332    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.483332    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.483332    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.487173    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.488191    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.488191    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.488335    8472 round_trippers.go:580]     Audit-Id: 266b4ffc-e86f-4f1b-b463-36bca9136481
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.488839    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.489392    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:44.989331    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.989428    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.989428    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.989428    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.992917    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.993400    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Audit-Id: d75583c4-9a74-49b4-bbf3-b56138886974
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.993757    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.481494    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.481494    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.481494    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.481778    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.487002    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:45.487002    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Audit-Id: 34cccb14-bef0-4d33-bac4-e822ad4bf7d0
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.487387    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.990444    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.990444    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.990444    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.990444    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.994459    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:45.995453    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Audit-Id: 75a4ef11-ddaa-4f93-8672-e7309c071368
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.995553    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.995597    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.996008    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.478860    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.478860    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.478860    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.478860    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.482906    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:46.482906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.482906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.484021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Audit-Id: f2e453d5-50bc-4639-bda1-a5a03905d0ad
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:46.484906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.485283    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stderr =====>] : 
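Every libmachine [executing ==>] / [stdout =====>] pair in this log is one PowerShell round trip: the Hyper-V driver shells out to powershell.exe and parses whatever lands on stdout (a VM state earlier, the guest IP here). A minimal stand-alone sketch of that round trip using os/exec, with the command string taken verbatim from the log (illustrative only, not minikube's driver code, which adds its own parsing and retries):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the [executing ==>] lines above: ask Hyper-V for
	// the first IP address of the VM's first network adapter.
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		`(( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]`,
	)
	out, err := cmd.Output() // stdout only; stderr is attached to *exec.ExitError
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "172.30.51.245" as logged above
}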
	I1212 23:14:46.902984    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:14:46.980436    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.980521    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.980521    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.980521    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.984189    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:46.984189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Audit-Id: 7c159fbf-c0d0-41ed-a33b-761beff59770
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.984333    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.984744    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.985579    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:47.051355    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:47.484303    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.484303    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.484303    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.484303    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.488895    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:47.488895    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Audit-Id: 28e8c341-cf42-49da-a69a-ab79f001048f
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.489240    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:47.868848    8472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:14:47.868942    8472 command_runner.go:130] > pod/storage-provisioner created
	I1212 23:14:47.990911    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.991083    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.991083    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.991083    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.996324    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:47.996324    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Audit-Id: 898f23b9-63a4-46cb-8539-9e21fae3e688
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.997714    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.480781    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.480862    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.480862    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.480862    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.484374    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.485189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.485189    8472 round_trippers.go:580]     Audit-Id: 1a3b1ec7-5eb6-4bb8-b344-5426a5516c00
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.485621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.989623    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.989623    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.989623    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.989698    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.992877    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.993906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Audit-Id: 975a7df8-210f-4288-bec3-86537d1ea98a
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.993906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.993906    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:49.083047    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:49.083318    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:49.083618    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
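Both sshutil.go lines in this stretch mark the same step: once the Hyper-V query returns the guest IP, the harness opens a key-based SSH connection to it on port 22 as user docker, using the id_rsa generated under the profile's machines directory. A self-contained sketch of that connection with golang.org/x/crypto/ssh (an illustration under those assumptions, not minikube's sshutil implementation; the uname command is a placeholder):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, address, and user are the ones shown in the log lines above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.30.51.245:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Each ssh_runner.go "Run:" line in the log corresponds to one session.
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("uname -r")
	fmt.Print(string(out))
}

The ssh_runner.go Run on the next line is exactly such a session, executing the kubectl apply for the storageclass addon inside the VM.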
	I1212 23:14:49.220179    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:49.478362    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.478404    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.478488    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.478488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.486550    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:49.486550    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Audit-Id: 886c4e27-fc97-4d2e-be30-23c8528e1331
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.487579    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:49.633908    8472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:14:49.634368    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:14:49.634438    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.634438    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.634438    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.638301    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.638301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Length: 1273
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Audit-Id: 478d6e3c-e333-45bd-ad37-ff39e2c109a4
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.638613    8472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:14:49.639458    8472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.639570    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:14:49.639570    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:49.639632    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.643499    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.643499    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.643499    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Length: 1220
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Audit-Id: a15a2fa8-ae37-4d33-8ee0-c9808f9a288d
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.644178    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.644178    8472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.682970    8472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:14:49.684353    8472 addons.go:502] enable addons completed in 10.0836106s: enabled=[storage-provisioner default-storageclass]
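With both addons applied, everything left in this stretch of the log is the readiness poll: node_ready.go fetches the Node object roughly every half second and checks its Ready condition, logging has status "Ready":"False" until the kubelet reports otherwise. A minimal client-go sketch of that loop (an illustration, not minikube's node_ready.go; the kubeconfig path is the one this report's environment uses):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// Same request as the repeating GET /api/v1/nodes/multinode-392000 above.
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-392000", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status %q:%q\n", node.Name, c.Type, c.Status)
				if c.Status == corev1.ConditionTrue {
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}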
	I1212 23:14:49.980729    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.980729    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.980729    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.980729    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.984838    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:49.985229    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Audit-Id: ce24cfdd-3acb-4830-ac23-4db47133d6a3
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.985323    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.985624    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.483312    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.483375    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.483375    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.483375    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.488227    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:50.488227    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Audit-Id: 6991df1a-7c65-4f8c-aa6d-8a4b07664792
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.488335    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.488445    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.981018    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.981153    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.981153    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.981153    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.986420    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:50.987021    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Audit-Id: 05d03ac9-757b-47ae-892d-06c9975e0504
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.987288    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.481784    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.481935    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.481935    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.481935    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.487331    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:51.487741    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Audit-Id: ea8e810d-7571-41b8-a29c-f7b350aa7e48
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.488700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.489229    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
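The burst of near-identical GETs above is minikube's node readiness poll: node_ready re-fetches /api/v1/nodes/multinode-392000 roughly every 500 ms and inspects the node's Ready condition until it flips to "True"; each round that still finds it False emits a `node_ready.go:58 ... "Ready":"False"` line like the one above. A minimal sketch of that check, assuming a configured client-go clientset — the package, function name, interval, and timeout here are illustrative, not minikube's actual node_ready implementation:

// nodeready_sketch.go -- illustrative sketch only, not minikube's code.
package nodeready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the Node object the way the GETs above do: one
// request per ~500ms until status.conditions reports Ready=True.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil // the log's `has status "Ready":"True"` case
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the GETs above
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}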
	I1212 23:14:51.980060    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.980060    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.980060    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.980060    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.986763    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:51.987222    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Audit-Id: e66e1130-e80e-4e5c-a2df-c6f097d5374f
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.987303    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.487530    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.487615    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.487615    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.487615    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.491306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.491306    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Audit-Id: 6d39f79a-048a-4380-88c0-1538a97cf6cb
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.492158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.988203    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.988350    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.988350    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.988350    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.991874    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.991874    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Audit-Id: b82dc74d-b44e-41ac-8e64-37803addc6c1
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.991874    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.992376    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.992376    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.992866    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.487128    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.487128    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.487128    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.487128    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.490404    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.490404    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Audit-Id: fcdaf883-7338-4102-abda-846f7169bb26
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.491349    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.491797    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:53.988709    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.988958    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.988958    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.988958    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.992351    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.992351    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Audit-Id: c1836498-4d32-49e6-a01e-d2011a223374
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.992796    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.992872    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.992872    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.993179    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.484052    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.484152    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.484152    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.484152    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.487262    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:54.487786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Audit-Id: f53da0c3-a775-4443-aabf-f7c4222d5d96
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.488171    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.984021    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.984123    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.984123    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.984123    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.989880    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:54.989880    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Audit-Id: c5095c7c-a76c-429e-af60-764abe494287
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.991622    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.485045    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.485181    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.485181    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.485181    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.489762    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:55.489762    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Audit-Id: 4f7c8477-81de-4b39-8164-bf264c826669
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.489762    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.490338    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.490338    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.490621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.987165    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.987255    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.987255    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.987255    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.990960    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:55.991209    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Audit-Id: 730af8dd-1c79-432a-ac28-d735f45d211a
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.991209    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:55.991993    8472 node_ready.go:49] node "multinode-392000" has status "Ready":"True"
	I1212 23:14:55.991993    8472 node_ready.go:38] duration metric: took 16.0282441s waiting for node "multinode-392000" to be "Ready" ...
	I1212 23:14:55.991993    8472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
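The node wait above took 16.0282441 s, i.e. roughly 32 of those ~500 ms polling rounds. The pod_ready phase that starts here first lists every kube-system pod once (the unfiltered PodList GET just below), then restricts the 6m0s wait to pods carrying one of the six label selectors named in the log line. A sketch of that selection step, reusing the package and imports from the previous sketch; the function name is illustrative:

// systemCriticalPods filters one unfiltered kube-system PodList down to
// the pods matching the selectors in the "extra waiting" log line above.
func systemCriticalPods(ctx context.Context, c kubernetes.Interface) ([]corev1.Pod, error) {
	list, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var critical []corev1.Pod
	for _, pod := range list.Items {
		l := pod.Labels
		if l["k8s-app"] == "kube-dns" || l["component"] == "etcd" ||
			l["component"] == "kube-apiserver" || l["component"] == "kube-controller-manager" ||
			l["k8s-app"] == "kube-proxy" || l["component"] == "kube-scheduler" {
			critical = append(critical, pod)
		}
	}
	return critical, nil
}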
	I1212 23:14:55.992424    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:55.992451    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.992451    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.992451    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.997828    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:55.997828    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Audit-Id: 52d7810c-f76c-4c45-9178-39943c5e611e
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:56.000563    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I1212 23:14:56.005713    8472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:56.005713    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.005713    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.005713    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.005713    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.009293    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:56.009293    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.009293    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Audit-Id: 349c895b-3263-4592-bf5f-cc4fce22f4db
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.009961    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.010548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.010601    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.010601    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.010670    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.013302    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.013302    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Audit-Id: 14638822-3485-4ab6-af72-f2d254050772
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.013994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.014102    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.014102    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.014313    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.014948    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.014948    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.014948    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.014948    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.017876    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.017876    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Audit-Id: e61611d3-94ea-464c-acce-2a665e01fb85
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.018073    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.018159    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.018325    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.018970    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.019023    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.019023    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.019078    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.020855    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:56.020855    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Audit-Id: d723e84b-6004-4853-8f4c-e9de464efdde
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.021772    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.021800    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.021800    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.021800    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.536622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.536622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.536622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.536622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.540896    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:56.540896    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.541442    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Audit-Id: ea416197-cb64-40af-bf73-38fd2e37a823
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.541534    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.541534    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.541670    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.542439    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.542559    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.542559    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.542559    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.544902    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.544902    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Audit-Id: 82379cb0-03c3-4187-8a08-c95f8c2d434e
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.546107    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.027636    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.027717    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.027791    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.027791    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.030425    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.030425    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Audit-Id: 856b15b9-b6fa-489d-9a24-eaaf1afc5bd5
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.031434    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.032501    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.032606    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.032658    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.032658    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.035158    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.035158    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Audit-Id: 2f81449f-83b9-4c66-bc2e-17ac17b48322
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.035158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.534454    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.534587    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.534587    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.534587    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.541021    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:57.541365    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Audit-Id: bb822741-a39c-491c-8b27-f5dc32b9ac7d
	I1212 23:14:57.541943    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.542190    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.542190    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.542190    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.542190    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.545257    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:57.545257    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.545896    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Audit-Id: 27629acd-42f2-4083-aba9-c01ef165283c
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.546084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.546084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.546180    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.546712    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:58.023516    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:58.023822    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.023880    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.023880    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.027764    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.028057    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.028057    8472 round_trippers.go:580]     Audit-Id: 1522c4b2-abdb-44ed-9ac8-0a151cbe371e
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.028173    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.028494    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1212 23:14:58.029540    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.029617    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.029617    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.029617    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.032006    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.033008    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Audit-Id: 5f970653-a2f7-4b0e-ab8b-5146ee17b7e9
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.033046    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.033115    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.033423    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.034124    8472 pod_ready.go:92] pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.034124    8472 pod_ready.go:81] duration metric: took 2.0284013s waiting for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034124    8472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
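The coredns wait above took 2.0284013 s, about four polling rounds; the interleaved node GETs show that pod_ready also re-confirms the node each round. Per pod, this is the same fetch-and-test loop as the node wait, only against the Pod's Ready condition. A sketch of the predicate and loop, again reusing the imports above, with illustrative names:

// isPodReady is the predicate the pod_ready waits are polling for:
// the Pod's Ready condition reporting True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitPodReady applies the same ~500ms poll to a single pod, as the
// coredns wait above and the etcd/kube-apiserver waits below do.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}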
	I1212 23:14:58.034268    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-392000
	I1212 23:14:58.034268    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.034268    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.034268    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.040664    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:58.040664    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Audit-Id: 8ec23e55-3f6f-45bb-9dd5-58fa0a89221a
	I1212 23:14:58.041172    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-392000","namespace":"kube-system","uid":"9ba15872-d011-4389-bbbd-cda3bb377f30","resourceVersion":"299","creationTimestamp":"2023-12-12T23:14:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.51.245:2379","kubernetes.io/config.hash":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.mirror":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.seen":"2023-12-12T23:14:17.439033677Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1212 23:14:58.041719    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.041719    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.041719    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.041719    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.045328    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.045328    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Audit-Id: 9c560ca1-5f98-49b8-ae36-71e9aa076f5e
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.045328    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.045328    8472 pod_ready.go:92] pod "etcd-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.045328    8472 pod_ready.go:81] duration metric: took 11.2037ms waiting for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-392000
	I1212 23:14:58.046330    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.046330    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.046330    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.048649    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.048649    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Audit-Id: ebed4532-17cb-49da-a702-3de6ff899b2d
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.048649    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-392000","namespace":"kube-system","uid":"4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75","resourceVersion":"330","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.51.245:8443","kubernetes.io/config.hash":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.mirror":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.seen":"2023-12-12T23:14:26.871565960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1212 23:14:58.048649    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.048649    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.048649    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.052979    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.052979    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.052979    8472 round_trippers.go:580]     Audit-Id: ba4e3ef6-8436-406b-be77-63a9e785adac
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.053729    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.053941    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.054233    8472 pod_ready.go:92] pod "kube-apiserver-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.054233    8472 pod_ready.go:81] duration metric: took 8.9055ms waiting for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-392000
	I1212 23:14:58.054233    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.054233    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.054233    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.057795    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.057795    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.057795    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.057795    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Audit-Id: 23c9283e-f0e0-44ab-b1c7-820bcafbc897
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.058055    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.058481    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-392000","namespace":"kube-system","uid":"60a15f93-6e63-4c2e-a54e-7e6a2275127c","resourceVersion":"296","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.mirror":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.seen":"2023-12-12T23:14:26.871570660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1212 23:14:58.059284    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.059347    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.059347    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.059347    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.067599    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:58.067599    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Audit-Id: cd4581bf-1000-4906-812b-59a573920066
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.068544    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.068544    8472 pod_ready.go:92] pod "kube-controller-manager-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.068544    8472 pod_ready.go:81] duration metric: took 14.3106ms waiting for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.068544    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.194675    8472 request.go:629] Waited for 125.8741ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.194825    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.194825    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.198109    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.198109    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Audit-Id: 5a8d39b0-49cf-41c3-9e07-80cfc7e1b033
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.198994    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.199312    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55nr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"76f72515-2132-4473-883e-2846ebaca62e","resourceVersion":"403","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"932f2a4e-5c28-4c7c-8885-1298fbe1d167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932f2a4e-5c28-4c7c-8885-1298fbe1d167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I1212 23:14:58.398673    8472 request.go:629] Waited for 198.4474ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.398787    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.398966    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.401717    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.401717    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.401717    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Audit-Id: b728eb3e-d54c-43cb-90ce-e7b356f69ae4
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.402725    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.402828    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.403583    8472 pod_ready.go:92] pod "kube-proxy-55nr8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.403583    8472 pod_ready.go:81] duration metric: took 335.0375ms waiting for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.403583    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.601380    8472 request.go:629] Waited for 197.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.601681    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.601681    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.605957    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.606145    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Audit-Id: 02f9b40f-c4e0-4c98-bcbc-9913ccb796e7
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.606409    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-392000","namespace":"kube-system","uid":"1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2","resourceVersion":"295","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.mirror":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.seen":"2023-12-12T23:14:26.871571660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1212 23:14:58.789396    8472 request.go:629] Waited for 182.2618ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789688    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789779    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.789779    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.789828    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.793340    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.794060    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Audit-Id: e123c53f-d439-4d57-931f-9f875d26f581
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.794126    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.795030    8472 pod_ready.go:92] pod "kube-scheduler-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.795030    8472 pod_ready.go:81] duration metric: took 391.4452ms waiting for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.795030    8472 pod_ready.go:38] duration metric: took 2.8027177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
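
The pod_ready loop above is a plain polling pattern: GET each control-plane pod, inspect its Ready condition, re-check the node, and repeat until Ready or a 6m0s timeout. Below is a minimal sketch of that pattern, assuming an unauthenticated HTTP client and the endpoint shapes visible in this log; waitPodReady is a hypothetical helper, and real code would use client-go with the kubeconfig rather than a bare http.Client.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// podStatus holds just the fields the readiness check needs.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitPodReady polls the API server until the pod reports Ready=True
// or the deadline passes. Sketch only: TLS verification is skipped and
// no credentials are sent, unlike the real client.
func waitPodReady(apiURL, namespace, name string, timeout time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s", apiURL, namespace, name))
		if err == nil {
			var p podStatus
			if json.NewDecoder(resp.Body).Decode(&p) == nil {
				for _, c := range p.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond) // re-poll cadence; the log's checks land milliseconds apart once Ready
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
}

func main() {
	// Endpoint and pod name taken from the log above.
	if err := waitPodReady("https://172.30.51.245:8443", "kube-system", "etcd-multinode-392000", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
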
	I1212 23:14:58.795030    8472 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:14:58.810986    8472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:14:58.830637    8472 command_runner.go:130] > 2099
	I1212 23:14:58.830637    8472 api_server.go:72] duration metric: took 19.1794438s to wait for apiserver process to appear ...
	I1212 23:14:58.830637    8472 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:14:58.830637    8472 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:14:58.838776    8472 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
	I1212 23:14:58.839718    8472 round_trippers.go:463] GET https://172.30.51.245:8443/version
	I1212 23:14:58.839718    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.839718    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.839718    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.841290    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:58.841290    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.841290    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Length: 264
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.841836    8472 round_trippers.go:580]     Audit-Id: 46b8d46d-380f-4f82-941f-34d5ff7fc981
	I1212 23:14:58.841875    8472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:14:58.841973    8472 api_server.go:141] control plane version: v1.28.4
	I1212 23:14:58.842105    8472 api_server.go:131] duration metric: took 11.468ms to wait for apiserver health ...
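
Once /healthz returns "ok", the client fetches /version and reports the control plane version (v1.28.4 here). The sketch below decodes the exact payload printed above; versionInfo is a hypothetical stand-in for the apimachinery version.Info type, which has the same field names.

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the /version response body shown in the log.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// The body returned at 23:14:58, with some fields elided.
	body := `{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`
	var v versionInfo
	if err := json.Unmarshal([]byte(body), &v); err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // matches the api_server.go:141 line
}
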
	I1212 23:14:58.842105    8472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:14:58.990794    8472 request.go:629] Waited for 148.3275ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990949    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990993    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.990993    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.990993    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.996780    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:58.996780    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.997050    8472 round_trippers.go:580]     Audit-Id: ef9a1c82-2d0d-4fd5-aef9-3720896905c4
	I1212 23:14:58.998795    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.002276    8472 system_pods.go:59] 8 kube-system pods found
	I1212 23:14:59.002323    8472 system_pods.go:61] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.002414    8472 system_pods.go:74] duration metric: took 160.3082ms to wait for pod list to return data ...
	I1212 23:14:59.002414    8472 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:14:59.195077    8472 request.go:629] Waited for 192.5258ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.195622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.195622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.199306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.199787    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Audit-Id: d11e054d-44f1-4ba9-98c1-9a69160ffdff
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Length: 261
	I1212 23:14:59.199787    8472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7c305be4-9460-4ff1-a283-85a13dcb1cde","resourceVersion":"367","creationTimestamp":"2023-12-12T23:14:39Z"}}]}
	I1212 23:14:59.199787    8472 default_sa.go:45] found service account: "default"
	I1212 23:14:59.199787    8472 default_sa.go:55] duration metric: took 197.3719ms for default service account to be created ...
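
The recurring "Waited for … due to client-side throttling, not priority and fairness" lines come from the Kubernetes client's own token-bucket limiter, not from server-side APF: client-go defaults to roughly 5 requests/second with a burst of 10, so back-to-back GETs queue for one to two hundred milliseconds. A sketch of the same behavior with golang.org/x/time/rate; the numbers are illustrative, not copied from minikube's config.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: ~5 requests/second, burst of 10, similar in spirit
	// to client-go's default client-side rate limiter.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if d := time.Since(start); d > time.Millisecond {
			// Mirrors the "Waited for ..." log lines above.
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
		}
	}
}
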
	I1212 23:14:59.199787    8472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:14:59.396801    8472 request.go:629] Waited for 196.4246ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.397321    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.397321    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.400691    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.400691    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Audit-Id: 70f11694-1074-4f5f-b23d-4a24efbaa455
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.403399    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.408656    8472 system_pods.go:86] 8 kube-system pods found
	I1212 23:14:59.409213    8472 system_pods.go:89] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.409293    8472 system_pods.go:126] duration metric: took 209.505ms to wait for k8s-apps to be running ...
	I1212 23:14:59.409358    8472 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:14:59.423142    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:59.445203    8472 system_svc.go:56] duration metric: took 35.9106ms WaitForService to wait for kubelet.
	I1212 23:14:59.445871    8472 kubeadm.go:581] duration metric: took 19.7946755s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:14:59.445871    8472 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:14:59.598916    8472 request.go:629] Waited for 152.7318ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.599012    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.599012    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.605849    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:59.605849    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Audit-Id: 36bbb4b8-2cd2-4825-9a0a-f9d3f7de5388
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.605849    8472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1212 23:14:59.606649    8472 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:14:59.606649    8472 node_conditions.go:123] node cpu capacity is 2
	I1212 23:14:59.606649    8472 node_conditions.go:105] duration metric: took 160.7768ms to run NodePressure ...
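
The NodePressure step reads the node's capacity fields straight off the NodeList: 17784752Ki of ephemeral storage and 2 CPUs in this run. Quantity strings like "17784752Ki" parse with apimachinery's resource package; a small sketch using the values from this log.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Capacity strings as they appear in the Node object above.
	storage := resource.MustParse("17784752Ki")
	cpu := resource.MustParse("2")
	fmt.Printf("ephemeral storage: %d bytes (~%.1f GiB)\n",
		storage.Value(), float64(storage.Value())/(1<<30))
	fmt.Printf("cpu capacity: %d\n", cpu.Value())
}
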
	I1212 23:14:59.606649    8472 start.go:228] waiting for startup goroutines ...
	I1212 23:14:59.606649    8472 start.go:233] waiting for cluster config update ...
	I1212 23:14:59.606649    8472 start.go:242] writing updated cluster config ...
	I1212 23:14:59.609246    8472 out.go:177] 
	I1212 23:14:59.621487    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:59.622710    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.625530    8472 out.go:177] * Starting worker node multinode-392000-m02 in cluster multinode-392000
	I1212 23:14:59.626570    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:14:59.626570    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:14:59.627622    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:14:59.627622    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:14:59.627622    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.635421    8472 start.go:365] acquiring machines lock for multinode-392000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:14:59.636404    8472 start.go:369] acquired machines lock for "multinode-392000-m02" in 983.5µs
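
The machines lock printed above ({Name:mk… Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}) has the shape of a juju/mutex Spec, which minikube uses to serialize machine creation across concurrent test processes: retry every 500ms, give up after 13 minutes. A hedged sketch of acquiring such a lock; the name is the one logged above, and error handling is trimmed.

package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	// Field values copied from the log line above.
	spec := mutex.Spec{
		Name:    "mk814f158b6187cc9297257c36fdbe0d2871c950",
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond,
		Timeout: 13 * time.Minute,
	}
	releaser, err := mutex.Acquire(spec)
	if err != nil {
		panic(err)
	}
	defer releaser.Release()
	fmt.Println("acquired machines lock")
}
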
	I1212 23:14:59.636641    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1212 23:14:59.636641    8472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1212 23:14:59.637295    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:14:59.637925    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:14:59.637925    8472 client.go:168] LocalClient.Create starting
	I1212 23:14:59.637925    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:14:59.638507    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.638593    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.638845    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:14:59.639076    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.639124    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.639207    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:15:01.516858    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:04.771547    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:04.771630    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:04.771709    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:08.419999    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:08.420189    8472 main.go:141] libmachine: [stderr =====>] : 
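
Every Hyper-V operation in this driver is a separate non-interactive powershell.exe invocation whose stdout/stderr are echoed into the log, and switch discovery parses ConvertTo-Json output like the array above. A minimal, Windows-only sketch of that exec-and-parse pattern; the Select fields match the log, and vmSwitch is a hypothetical struct, not the driver's own type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// vmSwitch matches the JSON fields emitted by ConvertTo-Json above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	// Same pattern libmachine logs: run PowerShell non-interactively
	// and decode its JSON output. Requires Windows with Hyper-V.
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		`[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`,
	).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType) // e.g. "Default Switch"
	}
}
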
	I1212 23:15:08.422680    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:15:08.872411    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: Creating VM...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:12.102765    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:12.102977    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:12.103063    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:15:12.103063    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:13.864474    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:13.864777    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:13.864985    8472 main.go:141] libmachine: Creating VHD
	I1212 23:15:13.864985    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C3CD4AE2-4C48-4AEE-B99B-DEEF0B4769F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:17.628988    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:15:17.629139    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:15:17.638018    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stderr =====>] : 
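
"Writing magic tar header" / "Writing SSH key tar header" refers to a docker-machine trick: the driver creates a small fixed-size VHD, writes a raw tar stream (a magic marker plus the freshly generated SSH key) directly into its data region, then converts it to a dynamic VHD and resizes it; on first boot the boot2docker guest detects the marker, partitions the disk, and extracts the key. A sketch of producing such a tar payload with archive/tar; the file name and key bytes are placeholders, not the driver's exact layout.

package main

import (
	"archive/tar"
	"os"
)

func main() {
	// Write a tar stream carrying an SSH public key; a real driver would
	// place this at the start of the raw VHD data region rather than in
	// a standalone file.
	f, err := os.Create("keypayload.tar")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	key := []byte("ssh-rsa AAAA... example@host\n") // placeholder key material
	tw := tar.NewWriter(f)
	hdr := &tar.Header{
		Name: ".ssh/authorized_keys", // assumed path the guest extracts on first boot
		Mode: 0600,
		Size: int64(len(key)),
	}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write(key); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
}
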
	I1212 23:15:20.769313    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -SizeBytes 20000MB
	I1212 23:15:23.326059    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:23.326281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:23.326443    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-392000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000-m02 -DynamicMemoryEnabled $false
	I1212 23:15:29.047581    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:29.047983    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:29.048174    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000-m02 -Count 2
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:31.217251    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\boot2docker.iso'
	I1212 23:15:33.748082    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd'
	I1212 23:15:36.359294    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: Starting VM...
	I1212 23:15:36.359738    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000-m02
	I1212 23:15:39.227776    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: [stderr =====>] : 
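
Taken together, the steps from 23:15:13 to 23:15:39 form one linear recipe: create the key-carrying VHD, convert and grow it, create the VM on the chosen switch, pin memory and CPUs, attach the boot ISO and disk, then start the VM. A condensed sketch that replays the same PowerShell sequence, with paths and names copied from the log and no output parsing.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	vm := "multinode-392000-m02"
	dir := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02`
	// The same ordered steps the log shows above.
	steps := []string{
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, vm, dir),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, vm),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, vm),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, vm, dir),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, vm, dir),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, vm),
	}
	for _, s := range steps {
		if out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", s).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%s: %v\n%s", s, err, out))
		}
	}
}
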
	I1212 23:15:39.227906    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:15:39.228071    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:41.509631    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:44.031565    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:44.031787    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:45.038541    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:49.774015    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:49.774142    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:50.775721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:55.502870    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:55.503039    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:56.518873    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:58.738659    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:58.738736    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:58.738844    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:02.269014    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:04.506810    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:04.506866    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:04.506903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:07.087487    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:07.087855    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:07.088033    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stderr =====>] : 
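The polling above — alternating `( Hyper-V\Get-VM <name> ).state` with a query for the adapter's first IPv4 address until one is reported — is the driver's "Waiting for host to start..." loop; it took roughly 28 seconds here before 172.30.56.38 appeared. A minimal Go sketch of that pattern, assuming a hypothetical runPS helper; names and timings are illustrative, not minikube's actual code:

	package hyperv

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runPS mirrors the log's "[executing ==>] : powershell.exe -NoProfile
	// -NonInteractive <cmd>" lines (hypothetical helper).
	func runPS(cmd string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForIP polls VM state, then the first adapter's first address,
	// sleeping between rounds, until an IP shows up or the deadline passes.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state, err := runPS(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
			if err != nil {
				return "", err
			}
			if state == "Running" {
				if ip, _ := runPS(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)); ip != "" {
					return ip, nil // e.g. 172.30.56.38 in this run
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
	}
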
	I1212 23:16:09.244063    8472 machine.go:88] provisioning docker machine ...
	I1212 23:16:09.244248    8472 buildroot.go:166] provisioning hostname "multinode-392000-m02"
	I1212 23:16:09.244333    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:11.421631    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:13.977447    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:13.977572    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:13.983166    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:13.992249    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:13.992249    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname
	I1212 23:16:14.163299    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000-m02
	
	I1212 23:16:14.163350    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:16.307595    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:18.839723    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:18.840482    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:18.840482    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:18.989326    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
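The two SSH commands just completed are the hostname provisioning step: the first sets the transient hostname and persists it to /etc/hostname; the second touches /etc/hosts only if no entry for the name exists yet, rewriting an existing 127.0.1.1 line rather than appending a duplicate. A sketch of how such commands can be assembled (the Go wrapper is hypothetical; the shell bodies match the log):

	package provision

	import "fmt"

	// hostnameCmds returns the two commands seen in the log: set + persist the
	// hostname, then idempotently map it to 127.0.1.1 in /etc/hosts.
	func hostnameCmds(name string) (setCmd, hostsCmd string) {
		setCmd = fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)
		hostsCmd = fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, name)
		return
	}
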
	I1212 23:16:18.990311    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:16:18.990311    8472 buildroot.go:174] setting up certificates
	I1212 23:16:18.990311    8472 provision.go:83] configureAuth start
	I1212 23:16:18.990453    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:21.069665    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:23.556570    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:28.222549    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:28.222832    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:28.222832    8472 provision.go:138] copyHostCerts
	I1212 23:16:28.223026    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:16:28.223356    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:16:28.223356    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:16:28.223923    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:16:28.224665    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:16:28.225195    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:16:28.225367    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:16:28.225569    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:16:28.226891    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:16:28.227287    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:16:28.227287    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:16:28.227775    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:16:28.228810    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000-m02 san=[172.30.56.38 172.30.56.38 localhost 127.0.0.1 minikube multinode-392000-m02]
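configureAuth first refreshes the three host-side PEMs with a remove-then-copy (the found/rm/cp triplets above), then mints a server certificate whose SANs cover the VM IP, localhost, and the node names, so the Docker TLS endpoint verifies however it is addressed. A sketch of issuing such a SAN-bearing certificate with crypto/x509, assuming an already-parsed CA; this is illustrative, not minikube's actual helper:

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// newServerCert sketches "generating server cert ... san=[...]": a CA-signed
	// certificate whose SANs match the log line above.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000-m02"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("172.30.56.38"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "multinode-392000-m02"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		return os.WriteFile("server.pem", certPEM, 0o644)
	}
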
	I1212 23:16:28.608171    8472 provision.go:172] copyRemoteCerts
	I1212 23:16:28.622324    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:28.622324    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:30.750561    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:33.272878    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:33.273157    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:33.273672    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:33.380622    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7582767s)
	I1212 23:16:33.380733    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:16:33.380808    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:16:33.420401    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:16:33.420965    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:16:33.458601    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:16:33.458774    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:16:33.496244    8472 provision.go:86] duration metric: configureAuth took 14.5058679s
	I1212 23:16:33.496324    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:33.496868    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:16:33.497008    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:38.152182    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:38.152702    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:38.152702    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:16:38.292294    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:16:38.292294    8472 buildroot.go:70] root file system type: tmpfs
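The `df --output=fstype / | tail -n 1` probe returning tmpfs appears to tell the provisioner that this Buildroot guest boots from RAM, so configuration written outside persisted mounts does not survive a reboot and the docker unit below has to be regenerated rather than assumed present. The probe itself is a one-liner:

	package provision

	import (
		"os/exec"
		"strings"
	)

	// rootFSType runs the same probe as the log; "tmpfs" means a RAM-backed
	// rootfs that must be re-provisioned on every boot.
	func rootFSType() (string, error) {
		out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
		return strings.TrimSpace(string(out)), err
	}
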
	I1212 23:16:38.292555    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:16:38.292555    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:40.464946    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:43.007365    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:43.008294    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:43.008294    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.51.245"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:16:43.171083    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.51.245
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:16:43.171185    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:45.284624    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:47.800669    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:47.801716    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:47.801716    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:16:48.748338    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:16:48.748338    8472 machine.go:91] provisioned docker machine in 39.5040974s
	I1212 23:16:48.748338    8472 client.go:171] LocalClient.Create took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m49.1099214s
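Two notes on the unit install just above. First, the `printf %!s(MISSING)` in the logged command is a logger formatting artifact; the echoed SSH output shows what was actually written, i.e. the real command was `printf %s "<unit text>" | sudo tee /lib/systemd/system/docker.service.new`. Second, despite the inherited "drop-in" comment block inside it, the file is written as a complete unit, and the empty `ExecStart=` line is the standard systemd idiom for clearing any inherited start command before declaring a new one. The follow-up `diff -u ... || { mv ...; }` makes installation a no-op when the rendered unit is unchanged; here the diff fails because no unit existed yet, so the new file is moved into place and docker is enabled, producing the "Created symlink" line. A sketch of that compare-and-swap step (Go wrapper hypothetical; the shell matches the log):

	package provision

	import "fmt"

	// installUnitCmd reproduces the conditional swap run over SSH: only when
	// the freshly rendered unit differs from the installed one is it moved
	// into place and the service reloaded, enabled, and restarted.
	func installUnitCmd(unitPath string) string {
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
				"sudo systemctl -f restart docker; }", unitPath)
	}
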
	I1212 23:16:48.748338    8472 start.go:300] post-start starting for "multinode-392000-m02" (driver="hyperv")
	I1212 23:16:48.748887    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:48.762204    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:48.762204    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:50.863756    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:53.416608    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:53.526358    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7640815s)
	I1212 23:16:53.541029    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:53.550919    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:16:53.550919    8472 command_runner.go:130] > ID=buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:16:53.550919    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:16:53.551099    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:16:53.552635    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:16:53.552635    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:16:53.567223    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:53.582208    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:16:53.623271    8472 start.go:303] post-start completed in 4.8749111s
	I1212 23:16:53.626212    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:55.698604    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:58.239486    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:16:58.242308    8472 start.go:128] duration metric: createHost completed in 1m58.6051335s
	I1212 23:16:58.242308    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:00.321547    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:02.864207    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:02.864907    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:02.864907    8472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:03.006436    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423023.005320607
	
	I1212 23:17:03.006436    8472 fix.go:206] guest clock: 1702423023.005320607
	I1212 23:17:03.006436    8472 fix.go:219] Guest: 2023-12-12 23:17:03.005320607 +0000 UTC Remote: 2023-12-12 23:16:58.2423084 +0000 UTC m=+328.348317501 (delta=4.763012207s)
	I1212 23:17:03.006606    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:05.102311    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:07.631708    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:07.632284    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:07.632480    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702423023
	I1212 23:17:07.785418    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:17:03 UTC 2023
	
	I1212 23:17:07.785481    8472 fix.go:226] clock set: Tue Dec 12 23:17:03 UTC 2023
	 (err=<nil>)
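Guest-clock sync: the `date +%!s(MISSING).%!N(MISSING)` above is the same logger artifact; the `1702423023.005320607` reply shows the command actually run was `date +%s.%N`. The provisioner compares that value against the host clock, finds a 4.76s delta, and writes an epoch timestamp back with `sudo date -s @1702423023`. A sketch of the comparison; the drift threshold and the choice of reference clock are assumptions, not taken from this run:

	package provision

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockFixCmd parses the guest's `date +%s.%N` output and, when the drift
	// against the local clock exceeds maxDrift, returns a date-set command.
	func clockFixCmd(guestOut string, maxDrift time.Duration) (cmd string, needed bool) {
		secs := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)[0]
		guest, err := strconv.ParseInt(secs, 10, 64)
		if err != nil {
			return "", false
		}
		drift := time.Since(time.Unix(guest, 0))
		if drift < 0 {
			drift = -drift
		}
		if drift <= maxDrift {
			return "", false
		}
		return fmt.Sprintf("sudo date -s @%d", time.Now().Unix()), true
	}
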
	I1212 23:17:07.785481    8472 start.go:83] releasing machines lock for "multinode-392000-m02", held for 2m8.1482636s
	I1212 23:17:07.785678    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:09.909750    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:12.452194    8472 out.go:177] * Found network options:
	I1212 23:17:12.452963    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.453612    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.454421    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.455285    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:17:12.456641    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.458904    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:12.459089    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:12.471636    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:17:12.471636    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:14.665006    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.330171    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.349676    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.349791    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.350393    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.520588    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:17:17.520698    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0616953s)
	I1212 23:17:17.520789    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1212 23:17:17.520789    8472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0491302s)
	W1212 23:17:17.520789    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:17.540506    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:17.565496    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:17:17.565496    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:17.565629    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:17.565729    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:17.592642    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:17:17.606915    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:17:17.641476    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:17:17.660823    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:17:17.677875    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:17:17.711806    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.740097    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:17:17.771613    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.803488    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:17.833971    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:17:17.864431    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:17.880090    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:17:17.891942    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:17.921922    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:18.092747    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
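The run of sed commands above rewrites /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.9, disable restrict_oom_score_adj, force `SystemdCgroup = false` (the cgroupfs driver, matching what docker is given below), migrate io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The crictl.yaml printf (another `%!s(MISSING)` logging artifact) aims crictl at the containerd socket while containerd is the candidate runtime. The same rewrites expressed as one Go pass over the file (regexes mirror the sed expressions; the `/systemd_cgroup/d` deletion is omitted):

	package provision

	import (
		"os"
		"regexp"
	)

	// containerdPatches mirrors the log's sed rewrites of config.toml.
	var containerdPatches = []struct{ re, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
		{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}

	func patchContainerdConfig(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, p := range containerdPatches {
			data = regexp.MustCompile(p.re).ReplaceAll(data, []byte(p.repl))
		}
		return os.WriteFile(path, data, 0o644)
	}
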
	I1212 23:17:18.119496    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:18.134351    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Unit]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:17:18.152056    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:17:18.152056    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:17:18.152056    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Service]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Type=notify
	I1212 23:17:18.152056    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:17:18.152056    8472 command_runner.go:130] > Environment=NO_PROXY=172.30.51.245
	I1212 23:17:18.152056    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:17:18.152056    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:17:18.152056    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:17:18.152056    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:17:18.152056    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:17:18.152056    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:17:18.152056    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:17:18.152056    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:17:18.153073    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:17:18.153073    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:17:18.153073    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:17:18.153073    8472 command_runner.go:130] > Delegate=yes
	I1212 23:17:18.153073    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:17:18.153073    8472 command_runner.go:130] > KillMode=process
	I1212 23:17:18.153073    8472 command_runner.go:130] > [Install]
	I1212 23:17:18.153073    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:17:18.165057    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.196057    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:18.246410    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.280066    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.313237    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:17:18.368580    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.388251    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:18.419806    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:17:18.434054    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:17:18.440054    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:17:18.453333    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:17:18.468540    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:17:18.509927    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:17:18.683814    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:17:18.837593    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:17:18.838769    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
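With containerd and crio stopped and crictl repointed at cri-dockerd, docker itself is given the same cgroupfs driver via a 130-byte /etc/docker/daemon.json scp'd from memory. The log does not echo the file's contents; the following is an assumed reconstruction, where only the cgroupfs driver choice is confirmed by the "configuring docker to use \"cgroupfs\"" line above:

	package provision

	// daemonJSON is an assumed reconstruction of the 130-byte
	// /etc/docker/daemon.json pushed above; treat everything except the
	// cgroupdriver setting as a guess.
	const daemonJSON = `{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}`
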
	I1212 23:17:18.883547    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:19.063745    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:18:20.172717    8472 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1212 23:18:20.172717    8472 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1212 23:18:20.172717    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1086969s)
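This is the step that sinks the node: with the new unit and daemon.json in place, `sudo systemctl restart docker` blocks for just over a minute and exits non-zero because the control process itself exited with an error; everything after this point is diagnostics. Note that the journal lines shown immediately below still cover the first, successful daemon start at 23:16:48 (the one triggered when the unit was installed); the entries for the failed 23:17:19 restart come later in the journal than the excerpt here.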
	I1212 23:18:20.190447    8472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 23:18:20.208531    8472 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.208822    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	I1212 23:18:20.208875    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	I1212 23:18:20.208924    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	I1212 23:18:20.208955    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.208996    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1212 23:18:20.209120    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209653    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209701    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209877    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209915    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.209960    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210035    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1212 23:18:20.210087    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1212 23:18:20.210128    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1212 23:18:20.210180    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210221    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210255    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210295    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210329    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210368    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210401    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I1212 23:18:20.210557    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1212 23:18:20.210678    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	I1212 23:18:20.210784    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I1212 23:18:20.210826    8472 command_runner.go:130] > Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I1212 23:18:20.218077    8472 out.go:177] 
	W1212 23:18:20.218999    8472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1212 23:18:20.219707    8472 out.go:239] * 
	W1212 23:18:20.220544    8472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:18:20.221540    8472 out.go:177] 
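Note: the restart loop above ends with dockerd unable to dial its managed containerd socket within the timeout ("/run/containerd/containerd.sock": context deadline exceeded), which is what surfaces as the RUNTIME_ENABLE exit. A minimal triage sketch, reusing the commands the error text itself suggests (profile and node names are taken from this log; `minikube ssh --node` access into the VM is assumed):

	# on the Windows host: capture full logs for a GitHub issue, as advised above
	out/minikube-windows-amd64.exe logs --file=logs.txt -p multinode-392000
	# shell into the failing node, then inspect the unit and the socket
	out/minikube-windows-amd64.exe ssh -p multinode-392000 -n m02
	sudo systemctl status docker.service
	sudo journalctl --no-pager -u docker
	ls -l /run/containerd/containerd.sock   # the socket dockerd timed out dialing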
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:39:06 UTC. --
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282437620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.284918206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.285109705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286113599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286332798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7694fc2e072409c82e9a89c81cdb1dbf3955a826194d4c6ce69896a818ffd8c/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eec0e2bb8f7fb3f97224e573a86f1d0c8af411baddfa1adaa20402928c80977d/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.073894364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074049263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074069063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074078763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132115055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132325154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132351354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132362153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.818830729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820198629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820221327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820295222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:57 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef8f16e239bc98e7eb9dc0c53fd98c42346ab8c95f8981cda5dde4865c3765b9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 12 23:18:58 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524301867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524431958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524458956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524471055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c0d1460fe14b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   ef8f16e239bc9       busybox-5bc68d56bd-x7ldl
	d33bb583a4c67       ead0a4a53df89                                                                                         24 minutes ago      Running             coredns                   0                   eec0e2bb8f7fb       coredns-5dd5756b68-4xn8h
	f6b34e581fc6d       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   d7694fc2e0724       storage-provisioner
	58046948f7a39       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              24 minutes ago      Running             kindnet-cni               0                   13c6e0fbb4c87       kindnet-bpcxd
	a260d7090f938       83f6cc407eed8                                                                                         24 minutes ago      Running             kube-proxy                0                   60c6b551ada48       kube-proxy-55nr8
	2313251d444bd       e3db313c6dbc0                                                                                         24 minutes ago      Running             kube-scheduler            0                   2f8be6d8ad0b8       kube-scheduler-multinode-392000
	22eab41fa9507       73deb9a3f7025                                                                                         24 minutes ago      Running             etcd                      0                   bb073669c83d7       etcd-multinode-392000
	235957741d342       d058aa5ab969c                                                                                         24 minutes ago      Running             kube-controller-manager   0                   0a157140134cc       kube-controller-manager-multinode-392000
	6c354edfe4229       7fe0e6f37db33                                                                                         24 minutes ago      Running             kube-apiserver            0                   74927bb72940a       kube-apiserver-multinode-392000
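The table above is minikube's point-in-time container snapshot from the control-plane node. A sketch for reproducing it interactively, assuming crictl ships in the node image (as it does in recent minikube ISOs):

	out/minikube-windows-amd64.exe ssh -p multinode-392000 -- sudo crictl ps -a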
	
	* 
	* ==> coredns [d33bb583a4c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52436 - 14801 "HINFO IN 6583598644721938310.5334892932610769491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082658561s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412009s
	[INFO] 10.244.0.3:57910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.064058426s
	[INFO] 10.244.0.3:37802 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.037057868s
	[INFO] 10.244.0.3:53205 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.098326683s
	[INFO] 10.244.0.3:48065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120602s
	[INFO] 10.244.0.3:58616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050508538s
	[INFO] 10.244.0.3:60247 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114602s
	[INFO] 10.244.0.3:38852 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191504s
	[INFO] 10.244.0.3:34962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01262466s
	[INFO] 10.244.0.3:40837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094102s
	[INFO] 10.244.0.3:50511 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000205404s
	[INFO] 10.244.0.3:46775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218404s
	[INFO] 10.244.0.3:51546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092302s
	[INFO] 10.244.0.3:51278 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170504s
	[INFO] 10.244.0.3:40156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096702s
	[INFO] 10.244.0.3:57387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190803s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170703s
	[INFO] 10.244.0.3:48895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108502s
	[INFO] 10.244.0.3:34622 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141402s
	[INFO] 10.244.0.3:36375 - 5 "PTR IN 1.48.30.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000268705s
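Note: the NXDOMAIN responses above are the expected search-path walk for a pod resolver with ndots:5. `kubernetes.default` misses upstream and under the `default.svc.cluster.local` suffix before `kubernetes.default.svc.cluster.local` answers NOERROR, so this CoreDNS log reflects healthy resolution. A spot-check sketch from inside the cluster (pod name taken from the container status section; assumes kubectl is pointed at this profile):

	kubectl exec busybox-5bc68d56bd-x7ldl -- nslookup kubernetes.default.svc.cluster.local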
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-392000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:39:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:34:55 +0000   Tue, 12 Dec 2023 23:14:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.51.245
	  Hostname:    multinode-392000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 430cf12d1f18486bbb2dad5ba35f34f7
	  System UUID:                7ad4f3ea-4ba4-0c41-b258-b71782793bdf
	  Boot ID:                    de054c31-4928-4877-9a0d-94e8f25eb559
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x7ldl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-4xn8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-multinode-392000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-bpcxd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-multinode-392000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-multinode-392000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-55nr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-multinode-392000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-392000 event: Registered Node multinode-392000 in Controller
	  Normal  NodeReady                24m                kubelet          Node multinode-392000 status is now: NodeReady
	
	
	Name:               multinode-392000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T23_34_53_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:34:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:37:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:38:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:38:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:38:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 12 Dec 2023 23:35:22 +0000   Tue, 12 Dec 2023 23:38:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.30.48.192
	  Hostname:    multinode-392000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 d64f283fdbd04ec2abf7a123575a634e
	  System UUID:                93e58034-5f25-104c-8ce8-7830c4ca3032
	  Boot ID:                    c6343bf3-5b49-4ca9-a1db-9a4a9b9458e8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gl8th       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-proxy-rmg5p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s (x2 over 4m14s)  kubelet          Node multinode-392000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x2 over 4m14s)  kubelet          Node multinode-392000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x2 over 4m14s)  kubelet          Node multinode-392000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node multinode-392000-m03 event: Registered Node multinode-392000-m03 in Controller
	  Normal  NodeReady                3m54s                  kubelet          Node multinode-392000-m03 status is now: NodeReady
	  Normal  NodeNotReady             38s                    node-controller  Node multinode-392000-m03 status is now: NodeNotReady
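Note: all four conditions on multinode-392000-m03 flipped to Unknown at 23:38:28 ("Kubelet stopped posting node status"), matching the NodeNotReady event 38s before this dump. A quick confirmation sketch (assumes kubectl is pointed at the multinode-392000 cluster):

	kubectl get nodes -o wide
	kubectl describe node multinode-392000-m03
	out/minikube-windows-amd64.exe node list -p multinode-392000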
	
	* 
	* ==> dmesg <==
	* [  +1.254662] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.084744] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.170112] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.825297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:13] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136611] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[ +29.496244] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.608816] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.164324] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.190534] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +1.324953] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.324912] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	[  +0.169479] systemd-fstab-generator[1166]: Ignoring "noauto" for root device
	[  +0.169520] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.165018] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.210508] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1309]: Ignoring "noauto" for root device
	[  +2.134792] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.270408] systemd-fstab-generator[1690]: Ignoring "noauto" for root device
	[  +0.838733] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.996306] systemd-fstab-generator[2661]: Ignoring "noauto" for root device
	[ +24.543609] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [22eab41fa950] <==
	* {"level":"info","ts":"2023-12-12T23:14:20.357792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgPreVoteResp from 93ff368cdeea47a1 at term 1"}
	{"level":"info","ts":"2023-12-12T23:14:20.357804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 received MsgVoteResp from 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 93ff368cdeea47a1 elected leader 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.361772Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.36777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"93ff368cdeea47a1","local-member-attributes":"{Name:multinode-392000 ClientURLs:[https://172.30.51.245:2379]}","request-path":"/0/members/93ff368cdeea47a1/attributes","cluster-id":"577d8ccb6648d9a8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:20.367821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.367989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.370538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.372122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.51.245:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.409981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"577d8ccb6648d9a8","local-member-id":"93ff368cdeea47a1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410139Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:20.410799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:24:20.417791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2023-12-12T23:24:20.419362Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":681,"took":"1.040537ms","hash":778906542}
	{"level":"info","ts":"2023-12-12T23:24:20.419458Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":778906542,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:29:20.427361Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2023-12-12T23:29:20.428786Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":922,"took":"784.101µs","hash":2156113925}
	{"level":"info","ts":"2023-12-12T23:29:20.428884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2156113925,"revision":922,"compact-revision":681}
	{"level":"info","ts":"2023-12-12T23:34:20.436518Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1163}
	{"level":"info","ts":"2023-12-12T23:34:20.438268Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1163,"took":"858.507µs","hash":3676843287}
	{"level":"info","ts":"2023-12-12T23:34:20.438371Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3676843287,"revision":1163,"compact-revision":922}
	
	* 
	* ==> kernel <==
	*  23:39:06 up 26 min,  0 users,  load average: 0.18, 0.29, 0.36
	Linux multinode-392000 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [58046948f7a3] <==
	* I1212 23:38:02.500699       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:38:12.512417       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:38:12.512531       1 main.go:227] handling current node
	I1212 23:38:12.512546       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:38:12.512555       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:38:22.524771       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:38:22.524816       1 main.go:227] handling current node
	I1212 23:38:22.524912       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:38:22.524925       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:38:32.532581       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:38:32.532662       1 main.go:227] handling current node
	I1212 23:38:32.532676       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:38:32.532684       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:38:42.541583       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:38:42.541661       1 main.go:227] handling current node
	I1212 23:38:42.541675       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:38:42.541684       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:38:52.549034       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:38:52.549711       1 main.go:227] handling current node
	I1212 23:38:52.549956       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:38:52.549991       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:39:02.562503       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:39:02.562542       1 main.go:227] handling current node
	I1212 23:39:02.562557       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:39:02.562564       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [6c354edfe422] <==
	* I1212 23:14:22.966861       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:22.967846       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:14:22.980339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:23.000634       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:14:23.000942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:23.002240       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:23.002278       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:23.002287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:23.002295       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:23.011378       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:23.760921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:14:23.770137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:14:23.770155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:24.576880       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:24.669218       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:14:24.814943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:14:24.825391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.51.245]
	I1212 23:14:24.827160       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:14:24.832899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:14:24.873569       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:26.688119       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:26.703417       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:14:26.718299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:38.752415       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 23:14:39.103035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [235957741d34] <==
	* I1212 23:14:55.831423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.3µs"
	I1212 23:14:57.948826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.3µs"
	I1212 23:14:57.994852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.967283ms"
	I1212 23:14:57.995045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.9µs"
	I1212 23:14:58.351328       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 23:18:56.342092       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 23:18:56.360783       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-x7ldl"
	I1212 23:18:56.372461       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4rg9t"
	I1212 23:18:56.394927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.064871ms"
	I1212 23:18:56.421496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.459964ms"
	I1212 23:18:56.445750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.867827ms"
	I1212 23:18:56.446077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="103.493µs"
	I1212 23:18:59.452572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.321812ms"
	I1212 23:18:59.452821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.694µs"
	I1212 23:34:52.106307       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-392000-m03\" does not exist"
	I1212 23:34:52.120727       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-392000-m03" podCIDRs=["10.244.1.0/24"]
	I1212 23:34:52.134312       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rmg5p"
	I1212 23:34:52.139634       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gl8th"
	I1212 23:34:53.581868       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-392000-m03"
	I1212 23:34:53.582294       1 event.go:307] "Event occurred" object="multinode-392000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-392000-m03 event: Registered Node multinode-392000-m03 in Controller"
	I1212 23:35:12.788142       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-392000-m03"
	I1212 23:38:28.652412       1 event.go:307] "Event occurred" object="multinode-392000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-392000-m03 status is now: NodeNotReady"
	I1212 23:38:28.666618       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-rmg5p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1212 23:38:28.680826       1 event.go:307] "Event occurred" object="kube-system/kindnet-gl8th" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1212 23:38:57.271941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="111.7µs"
	
	* 
	* ==> kube-proxy [a260d7090f93] <==
	* I1212 23:14:40.548388       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:40.568436       1 node.go:141] Successfully retrieved node IP: 172.30.51.245
	I1212 23:14:40.635432       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:40.635716       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:40.638923       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:40.639152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:40.639551       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:40.640017       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:40.641081       1 config.go:188] "Starting service config controller"
	I1212 23:14:40.641288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:40.641685       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:40.641937       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:40.644879       1 config.go:315] "Starting node config controller"
	I1212 23:14:40.645073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:40.742503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:14:40.742567       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:40.745261       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2313251d444b] <==
	* W1212 23:14:22.973548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:22.973806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.868650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:14:23.868677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:14:23.880821       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:14:23.880850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:23.906825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:14:23.907043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:14:23.908460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:23.909050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.954797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:23.954886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:23.961825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:14:23.961846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:14:24.085183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:14:24.085212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:14:24.103672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:14:24.103696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:14:24.119305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:24.119483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:24.143381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:14:24.143650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:14:24.300755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:14:24.300991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:14:25.823950       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:39:07 UTC. --
	Dec 12 23:32:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:33:27 multinode-392000 kubelet[2682]: E1212 23:33:27.003252    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:33:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:34:27 multinode-392000 kubelet[2682]: E1212 23:34:27.005543    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:34:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:35:27 multinode-392000 kubelet[2682]: E1212 23:35:27.004961    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:36:27 multinode-392000 kubelet[2682]: E1212 23:36:27.005054    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:37:27 multinode-392000 kubelet[2682]: E1212 23:37:27.014710    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:38:27 multinode-392000 kubelet[2682]: E1212 23:38:27.002495    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:38:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:38:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:38:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
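The kubelet errors repeated above are all the same canary probe: kubelet tries to create a throwaway chain in the IPv6 nat table once a minute, and the guest kernel (Linux 5.10.57, Buildroot 2021.02.12) has no ip6table_nat support, so ip6tables-legacy exits with status 3. A minimal Go sketch of that probe, assuming a Linux machine with the legacy ip6tables binary on PATH (illustrative only, not kubelet's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// probeCanary mirrors the failing step journaled above: create (and then
// remove) a throwaway chain in the IPv6 nat table. On a kernel without the
// ip6table_nat module the create fails with exit status 3 and the message
// "can't initialize ip6tables table `nat'".
func probeCanary() error {
	out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		return fmt.Errorf("could not set up iptables canary: %v: %s", err, out)
	}
	// Creation succeeded; clean the canary chain up again.
	return exec.Command("ip6tables", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
}

func main() {
	if err := probeCanary(); err != nil {
		fmt.Println(err) // on this guest: exit status 3, table does not exist
	}
}

This error is noise rather than a cause of the failure: IPv4 proxying is unaffected, which matches kube-proxy's "No iptables support for family" ipFamily="IPv6" line above.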
** stderr ** 
	W1212 23:38:58.864564    8916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
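The "Unable to resolve the current Docker CLI context" warning that opens every stderr block in this report is unrelated to the failures (the hyperv driver is in use, not Docker), and the opaque path component is just a digest: the Docker CLI keeps context metadata under contexts\meta\<sha256 of the context name>\meta.json, and 37a8eec1ce19... looks like sha256("default"). A short Go sketch under that assumption:

package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

func main() {
	// Assumption: the digest in the warning is the SHA-256 of the context
	// name "default". Printing it lets you compare against the path above.
	sum := sha256.Sum256([]byte("default"))
	fmt.Println(filepath.Join("contexts", "meta", fmt.Sprintf("%x", sum), "meta.json"))
}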
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000: (12.1034246s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-392000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-4rg9t
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/StopNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t
helpers_test.go:282: (dbg) kubectl --context multinode-392000 describe pod busybox-5bc68d56bd-4rg9t:

                                                
                                                
-- stdout --
	Name:             busybox-5bc68d56bd-4rg9t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrqjf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hrqjf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  5m23s (x4 over 20m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
	  Warning  FailedScheduling  23s                  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (99.49s)
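On the FailedScheduling events above: the two busybox replicas repel each other across nodes, so once multinode-392000-m03 goes NotReady (and picks up the node.kubernetes.io/unreachable taint) there is no remaining node where busybox-5bc68d56bd-4rg9t is allowed to run. A hedged sketch in Go of the kind of anti-affinity term that produces exactly this message, using client-go types (an assumption about the test's manifest, not a copy of it):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinity forbids two pods labelled app=busybox from sharing a
// kubernetes.io/hostname. With one of two nodes untolerably tainted, the
// scheduler reports "0/2 nodes are available: 1 node(s) didn't match pod
// anti-affinity rules, 1 node(s) had untolerated taint ...".
func antiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", antiAffinity())
}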

                                                
                                    
TestMultiNode/serial/StartAfterStop (162.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 node start m03 --alsologtostderr
E1212 23:39:25.867753   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:40:53.177762   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 23:41:08.657393   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 23:41:22.645976   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:41:25.443773   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 node start m03 --alsologtostderr: (2m8.6017199s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-392000 status: exit status 1 (470.5799ms)

                                                
                                                
** stderr ** 
	W1212 23:41:29.577899   14028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-392000 status" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-392000 -n multinode-392000: (12.0882259s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-392000 logs -n 25: (8.118127s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:29 UTC | 12 Dec 23 23:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --          |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --          |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t --          |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl --          |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t -- nslookup |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:30 UTC | 12 Dec 23 23:30 UTC |
	|         | busybox-5bc68d56bd-x7ldl -- nslookup |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- get pods -o   | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-4rg9t             |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC | 12 Dec 23 23:31 UTC |
	|         | busybox-5bc68d56bd-x7ldl             |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-392000 -- exec          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:31 UTC |                     |
	|         | busybox-5bc68d56bd-x7ldl -- sh       |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.48.1             |                  |                   |         |                     |                     |
	| node    | add -p multinode-392000 -v 3         | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:32 UTC | 12 Dec 23 23:35 UTC |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-392000 node stop m03       | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:37 UTC |
	| node    | multinode-392000 node start          | multinode-392000 | minikube7\jenkins | v1.32.0 | 12 Dec 23 23:39 UTC | 12 Dec 23 23:41 UTC |
	|         | m03 --alsologtostderr                |                  |                   |         |                     |                     |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:11:30
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:11:30.070723    8472 out.go:296] Setting OutFile to fd 812 ...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.071716    8472 out.go:309] Setting ErrFile to fd 756...
	I1212 23:11:30.071716    8472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:11:30.094706    8472 out.go:303] Setting JSON to false
	I1212 23:11:30.097728    8472 start.go:128] hostinfo: {"hostname":"minikube7","uptime":76287,"bootTime":1702346402,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 23:11:30.097728    8472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 23:11:30.099331    8472 out.go:177] * [multinode-392000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 23:11:30.099722    8472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:11:30.099722    8472 notify.go:220] Checking for updates...
	I1212 23:11:30.100958    8472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:11:30.101483    8472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 23:11:30.102516    8472 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:11:30.103354    8472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:11:30.104853    8472 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:11:35.379035    8472 out.go:177] * Using the hyperv driver based on user configuration
	I1212 23:11:35.380001    8472 start.go:298] selected driver: hyperv
	I1212 23:11:35.380001    8472 start.go:902] validating driver "hyperv" against <nil>
	I1212 23:11:35.380001    8472 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:11:35.430879    8472 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:11:35.431976    8472 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:11:35.432174    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:11:35.432174    8472 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:11:35.432174    8472 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:11:35.432174    8472 start_flags.go:323] config:
	{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:35.432785    8472 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:11:35.434592    8472 out.go:177] * Starting control plane node multinode-392000 in cluster multinode-392000
	I1212 23:11:35.434882    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:11:35.435410    8472 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 23:11:35.435444    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:11:35.435894    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:11:35.435894    8472 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 23:11:35.436458    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:11:35.436458    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json: {Name:mk07adc881ba1a1ec87edb34c2760e84e9f12eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:35.438010    8472 start.go:365] acquiring machines lock for multinode-392000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:11:35.438172    8472 start.go:369] acquired machines lock for "multinode-392000" in 43.3µs
	I1212 23:11:35.438240    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:11:35.438240    8472 start.go:125] createHost starting for "" (driver="hyperv")
	I1212 23:11:35.439294    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:11:35.439734    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:11:35.439996    8472 client.go:168] LocalClient.Create starting
	I1212 23:11:35.440162    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.440859    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441050    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:11:35.441323    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:11:35.441543    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:11:37.487993    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:37.488170    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:11:39.204044    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:11:39.204143    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:39.204222    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:40.663065    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:40.663233    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:44.190819    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:44.191081    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:44.194062    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:11:44.711737    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: Creating VM...
	I1212 23:11:44.974138    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:11:47.732456    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:11:47.732576    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:47.732727    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:11:47.732880    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:11:49.467956    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:11:49.468070    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:49.468070    8472 main.go:141] libmachine: Creating VHD
	I1212 23:11:49.468208    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F469FE2D-E21B-45E1-BE12-1FCB18DB12B2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:11:53.098969    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:11:53.099306    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:11:53.108721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:56.276467    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:56.276637    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd' -SizeBytes 20000MB
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:11:58.764583    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:11:58.764692    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-392000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:02.257034    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000 -DynamicMemoryEnabled $false
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:04.436243    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:04.436332    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000 -Count 2
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:06.523889    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\boot2docker.iso'
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:09.183414    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\disk.vhd'
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:11.817801    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:11.817904    8472 main.go:141] libmachine: Starting VM...
	I1212 23:12:11.817904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:14.636639    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:12:14.636759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:16.857062    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:16.857260    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:16.857330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:19.371072    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:20.386945    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:22.605793    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:22.605951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:25.176543    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:26.191747    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:28.348821    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:28.349104    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:30.824944    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:30.825184    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:31.825449    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:33.970275    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:36.445712    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:12:36.445785    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:37.459217    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:39.667912    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:42.223396    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:42.223526    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:44.305043    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:44.305406    8472 main.go:141] libmachine: [stderr =====>] : 
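Note: the "Waiting for host to start" block above is a poll: check the VM state, then ask for the first address on the first network adapter, and retry about once a second until the adapter reports one. A minimal sketch of that loop, assuming the same powershell.exe invocation pattern (ps is a hypothetical helper):

// waitip.go: sketch of the state/IP poll visible above.
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func ps(command string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
	return string(out), err
}

// waitForIP checks the VM is Running, then asks for the first address on the
// first adapter; empty stdout means "no lease yet", so sleep and retry.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, _ := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if strings.TrimSpace(state) == "Running" {
			ip, _ := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if addr := strings.TrimSpace(ip); addr != "" {
				return addr, nil // 172.30.51.245 in the run above
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an address", vm)
}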
	I1212 23:12:44.305406    8472 machine.go:88] provisioning docker machine ...
	I1212 23:12:44.305506    8472 buildroot.go:166] provisioning hostname "multinode-392000"
	I1212 23:12:44.305650    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:46.463622    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:46.463699    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:48.946017    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:48.946116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:48.952068    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:48.964084    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:48.964084    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000 && echo "multinode-392000" | sudo tee /etc/hostname
	I1212 23:12:49.130659    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000
	
	I1212 23:12:49.130793    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:51.216329    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:51.216440    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:53.719384    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:53.725386    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:12:53.726016    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:12:53.726016    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:12:53.876910    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:12:53.876910    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:12:53.877039    8472 buildroot.go:174] setting up certificates
	I1212 23:12:53.877109    8472 provision.go:83] configureAuth start
	I1212 23:12:53.877163    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:12:55.991772    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:55.992098    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:12:58.499383    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:12:58.499603    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:00.594939    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:00.595022    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:03.100178    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:03.100273    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:03.100273    8472 provision.go:138] copyHostCerts
	I1212 23:13:03.100538    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:13:03.100666    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:13:03.100666    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:13:03.101260    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:13:03.102786    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:13:03.103156    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:13:03.103156    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:13:03.103581    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:13:03.104593    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:13:03.105032    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:13:03.105032    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:13:03.105182    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:13:03.106302    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000 san=[172.30.51.245 172.30.51.245 localhost 127.0.0.1 minikube multinode-392000]
	I1212 23:13:03.360027    8472 provision.go:172] copyRemoteCerts
	I1212 23:13:03.374057    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:13:03.374057    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:05.470598    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:08.007608    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:08.008195    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:08.116237    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7420653s)
	I1212 23:13:08.116237    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:13:08.116427    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:13:08.152557    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:13:08.153040    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:13:08.195988    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:13:08.196559    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:13:08.232338    8472 provision.go:86] duration metric: configureAuth took 14.3551646s
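Note: configureAuth above issues a server certificate signed by the local minikube CA with the SAN list shown in the "generating server cert" line (the node IP, 127.0.0.1, localhost, minikube, and the node name), then copies ca.pem, server.pem and server-key.pem into /etc/docker. A minimal sketch of a SAN-bearing issuance with Go's crypto/x509; the key size and validity period are illustrative assumptions, and the CA pair is assumed to be loaded already from ca.pem / ca-key.pem.

// servercert.go: sketch of the server-cert issuance step.
package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the CA, carrying the same
// SAN shape as the log line: IP SANs plus DNS-name SANs.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048) // size is an assumption
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("172.30.51.245"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-392000"},
	}
	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return certDER, key, err
}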
	I1212 23:13:08.232338    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:13:08.233351    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:13:08.233351    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:10.326980    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:10.327281    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:12.824323    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:12.830327    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:12.831103    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:12.831103    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:13:12.971332    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:13:12.971397    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:13:12.971686    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:13:12.971759    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:15.048938    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:17.524781    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:17.524929    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:17.532264    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:17.532875    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:17.533036    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:13:17.693682    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:13:17.693682    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:19.797590    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:19.797719    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:22.305428    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:22.305611    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:22.311364    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:22.312148    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:22.312148    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:13:23.268460    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:13:23.268460    8472 machine.go:91] provisioned docker machine in 38.9628792s
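Note: the diff-or-install one-liner above is an idempotence guard: diff exits non-zero when the rendered unit differs from the installed one (or, as in this run, when the installed one does not exist yet), and only then is the new unit moved into place and docker reloaded, enabled, and restarted. The same guard in a minimal local Go sketch:

// writeifchanged.go: sketch of the "diff -u ... || { mv ...; systemctl ... }" pattern.
package sketch

import (
	"bytes"
	"os"
)

// installIfChanged returns true only when the rendered unit actually replaced
// the installed one, so the caller knows to daemon-reload and restart docker.
func installIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // identical: diff exits 0, no restart needed
	}
	// differs or missing (diff exits non-zero, as in "can't stat" above)
	if werr := os.WriteFile(path+".new", rendered, 0o644); werr != nil {
		return false, werr
	}
	return true, os.Rename(path+".new", path) // mirrors the sudo mv
}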
	I1212 23:13:23.268460    8472 client.go:171] LocalClient.Create took 1m47.8279792s
	I1212 23:13:23.268460    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m47.8282413s
	I1212 23:13:23.268460    8472 start.go:300] post-start starting for "multinode-392000" (driver="hyperv")
	I1212 23:13:23.268460    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:13:23.283134    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:13:23.283134    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:25.344143    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:25.344398    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:25.344531    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:27.853202    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:27.853202    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:27.960465    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6773102s)
	I1212 23:13:27.975019    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:13:27.981168    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:13:27.981317    8472 command_runner.go:130] > ID=buildroot
	I1212 23:13:27.981317    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:13:27.981317    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:13:27.981408    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:13:27.981509    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:13:27.981573    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:13:27.982899    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:13:27.982899    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:13:27.996731    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:13:28.011281    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:13:28.049499    8472 start.go:303] post-start completed in 4.7810169s
	I1212 23:13:28.051903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:30.124373    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:30.124520    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:32.635986    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:32.636168    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:32.636335    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:13:32.639612    8472 start.go:128] duration metric: createHost completed in 1m57.2008454s
	I1212 23:13:32.639734    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:34.733628    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:37.246381    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:37.252006    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:37.252675    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:37.252675    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:13:37.394466    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422817.389981544
	
	I1212 23:13:37.394466    8472 fix.go:206] guest clock: 1702422817.389981544
	I1212 23:13:37.394466    8472 fix.go:219] Guest: 2023-12-12 23:13:37.389981544 +0000 UTC Remote: 2023-12-12 23:13:32.6396781 +0000 UTC m=+122.746612401 (delta=4.750303444s)
	I1212 23:13:37.394466    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:39.525843    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:39.525951    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:42.048856    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:42.049171    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:42.054999    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:13:42.057020    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.51.245 22 <nil> <nil>}
	I1212 23:13:42.057020    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702422817
	I1212 23:13:42.207558    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:13:37 UTC 2023
	
	I1212 23:13:42.207558    8472 fix.go:226] clock set: Tue Dec 12 23:13:37 UTC 2023
	 (err=<nil>)
	I1212 23:13:42.207558    8472 start.go:83] releasing machines lock for "multinode-392000", held for 2m6.7687735s
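Note: the clock fix above reads the guest's date +%s.%N, compares it against the host-side reference (a 4.75s delta in this run), and then resets the guest to a whole-second value with sudo date -s @<unix seconds> (@1702422817 here). A minimal sketch of the two halves; which reference second gets applied is the caller's choice, and the float parse loses a few hundred nanoseconds, which is fine at this granularity.

// clockfix.go: sketch of the guest clock-skew check and reset command.
package sketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far it
// sits from ref (positive when the guest is ahead, as above).
func clockDelta(guestOut string, ref time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(ref), nil
}

// setClockCmd builds the reset command the log shows next.
func setClockCmd(to time.Time) string {
	return fmt.Sprintf("sudo date -s @%d", to.Unix())
}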
	I1212 23:13:42.208388    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:44.275265    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:46.748039    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:46.748116    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:46.752230    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:13:46.752339    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:46.765270    8472 ssh_runner.go:195] Run: cat /version.json
	I1212 23:13:46.765814    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:48.940095    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:48.940372    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:13:51.518393    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.518589    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.519047    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:13:51.538089    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:13:51.538571    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:13:51.618146    8472 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 23:13:51.618146    8472 ssh_runner.go:235] Completed: cat /version.json: (4.8528548s)
	I1212 23:13:51.632470    8472 ssh_runner.go:195] Run: systemctl --version
	I1212 23:13:51.705182    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:13:51.705326    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9530322s)
	I1212 23:13:51.705474    8472 command_runner.go:130] > systemd 247 (247)
	I1212 23:13:51.705474    8472 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:13:51.717133    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:13:51.725591    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:13:51.726008    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:13:51.738060    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:13:51.760525    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:13:51.761431    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:13:51.761431    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:51.761737    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:51.787290    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:13:51.802604    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:13:51.833298    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:13:51.849124    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:13:51.865424    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:13:51.896430    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.925062    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:13:51.954292    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:13:51.986199    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:13:52.018341    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:13:52.051014    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:13:52.066722    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:13:52.079021    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:13:52.108672    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:52.285653    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
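Note: the sed one-liners above rewrite /etc/containerd/config.toml in place; the two that matter for the "cgroupfs" driver decision are SystemdCgroup = false and the pinned pause:3.9 sandbox image. The same edits as a minimal Go sketch, with regexes mirroring the sed expressions in the log:

// containerdtoml.go: sketch of the config.toml rewrites logged above.
package sketch

import (
	"os"
	"regexp"
)

// rewriteContainerdConfig forces SystemdCgroup = false and pins the sandbox
// image, preserving each line's original indentation via the capture group.
func rewriteContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	data = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.9"`))
	return os.WriteFile(path, data, 0o644)
}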
	I1212 23:13:52.311279    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:13:52.326723    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Unit]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:13:52.345659    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:13:52.345659    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:13:52.345659    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:13:52.345659    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:13:52.345659    8472 command_runner.go:130] > [Service]
	I1212 23:13:52.345659    8472 command_runner.go:130] > Type=notify
	I1212 23:13:52.345659    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:13:52.345659    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:13:52.346602    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:13:52.346602    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:13:52.346602    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:13:52.346602    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:13:52.346602    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:13:52.346602    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:13:52.346602    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:13:52.346602    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:13:52.346602    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:13:52.346602    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:13:52.346602    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:13:52.346602    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:13:52.346602    8472 command_runner.go:130] > Delegate=yes
	I1212 23:13:52.346602    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:13:52.346602    8472 command_runner.go:130] > KillMode=process
	I1212 23:13:52.346602    8472 command_runner.go:130] > [Install]
	I1212 23:13:52.346602    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:13:52.361605    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.398612    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:13:52.438497    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:13:52.478249    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.515469    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:13:52.572526    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:13:52.596922    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:13:52.625715    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:13:52.640295    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:13:52.648317    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:13:52.660918    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:13:52.675527    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 23:13:52.716542    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:13:52.882321    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:13:53.028395    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:13:53.028810    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 23:13:53.070347    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:53.231794    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:13:54.707655    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4758548s)
	I1212 23:13:54.722714    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:54.886957    8472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 23:13:55.059072    8472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 23:13:55.219495    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.397909    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 23:13:55.436243    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:13:55.597738    8472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 23:13:55.697504    8472 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 23:13:55.711625    8472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 23:13:55.718995    8472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:13:55.718995    8472 command_runner.go:130] > Device: 16h/22d	Inode: 928         Links: 1
	I1212 23:13:55.718995    8472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 23:13:55.719086    8472 command_runner.go:130] > Access: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Modify: 2023-12-12 23:13:55.612702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] > Change: 2023-12-12 23:13:55.617702172 +0000
	I1212 23:13:55.719086    8472 command_runner.go:130] >  Birth: -
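Note: "Will wait 60s for socket path" above is a simple stat poll: succeed as soon as /var/run/cri-dockerd.sock exists and is a socket, give up at the deadline. A minimal sketch; the 500ms interval is an assumption, not minikube's actual value.

// socketwait.go: sketch of the socket wait logged above.
package sketch

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls stat() until path exists and is a unix socket,
// or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // e.g. /var/run/cri-dockerd.sock above
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}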
	I1212 23:13:55.719245    8472 start.go:543] Will wait 60s for crictl version
	I1212 23:13:55.732224    8472 ssh_runner.go:195] Run: which crictl
	I1212 23:13:55.737239    8472 command_runner.go:130] > /usr/bin/crictl
	I1212 23:13:55.751402    8472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:13:55.821560    8472 command_runner.go:130] > Version:  0.1.0
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeName:  docker
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 23:13:55.821560    8472 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:13:55.821684    8472 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 23:13:55.831458    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.865302    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.877867    8472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 23:13:55.906635    8472 command_runner.go:130] > 24.0.7
	I1212 23:13:55.909704    8472 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 23:13:55.909704    8472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1212 23:13:55.915499    8472 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:bf:68:bc Flags:up|broadcast|multicast|running}
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: fe80::d4ef:20a3:a5e3:a481/64
	I1212 23:13:55.919105    8472 ip.go:210] interface addr: 172.30.48.1/20
	I1212 23:13:55.931095    8472 ssh_runner.go:195] Run: grep 172.30.48.1	host.minikube.internal$ /etc/hosts
	I1212 23:13:55.936984    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:13:55.954782    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:13:55.966850    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:13:55.989987    8472 docker.go:671] Got preloaded images: 
	I1212 23:13:55.989987    8472 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 23:13:56.002978    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:13:56.016572    8472 command_runner.go:139] > {"Repositories":{}}
	I1212 23:13:56.029505    8472 ssh_runner.go:195] Run: which lz4
	I1212 23:13:56.035359    8472 command_runner.go:130] > /usr/bin/lz4
	I1212 23:13:56.035359    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:13:56.046382    8472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:13:56.052856    8472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:13:56.052856    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 23:13:58.736125    8472 docker.go:635] Took 2.700536 seconds to copy over tarball
	I1212 23:13:58.753146    8472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:14:08.022919    8472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2697318s)
	I1212 23:14:08.022919    8472 ssh_runner.go:146] rm: /preloaded.tar.lz4
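Note: the preload step above is a copy-if-missing pattern: the stat existence check fails, so the ~423MB image tarball is scp'd over, unpacked into /var with tar -I lz4, and deleted. Sketched below with stand-in run/copyTarball callbacks for the SSH runner and scp step; the commands mirror the log.

// preload.go: sketch of the preloaded-tarball flow logged above.
package sketch

// preloadImages copies the tarball only when the guest lacks it, then
// extracts it into /var and removes the archive.
func preloadImages(run func(cmd string) error, copyTarball func() error) error {
	if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		if err := copyTarball(); err != nil { // ~423 MB over scp in this run
			return err
		}
	}
	if err := run("sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	return run("rm -f /preloaded.tar.lz4")
}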
	I1212 23:14:08.095190    8472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 23:14:08.111721    8472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 23:14:08.111721    8472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 23:14:08.157625    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:14:08.340167    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:14:10.676687    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3364436s)
	I1212 23:14:10.688217    8472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 23:14:10.713622    8472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 23:14:10.713688    8472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 23:14:10.713688    8472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:10.713884    8472 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 23:14:10.713884    8472 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:14:10.725093    8472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 23:14:10.761269    8472 command_runner.go:130] > cgroupfs
	I1212 23:14:10.761441    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:10.761635    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:10.761699    8472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:14:10.761699    8472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.51.245 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-392000 NodeName:multinode-392000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.51.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.51.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:14:10.761920    8472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.51.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-392000"
	  kubeletExtraArgs:
	    node-ip: 172.30.51.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.51.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
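	One quirk worth flagging in the generated KubeletConfiguration: the eviction thresholds are literal percent strings such as "0%", and a `%` inside a string that later travels through Go's printf machinery is parsed as a verb, which is why these values sometimes surface in minikube logs as `0%!"(MISSING)`. A small sketch of the pitfall (a hypothetical snippet, not minikube's logging code):

```go
package main

import "fmt"

func main() {
	line := `nodefs.available: "0%"`

	// Bug: using data as the format string. fmt parses `%"` as a verb
	// with no operand and emits `%!"(MISSING)` in its place.
	fmt.Printf(line + "\n") // nodefs.available: "0%!"(MISSING)

	// Fix: pass the data as an operand (or escape % as %%).
	fmt.Printf("%s\n", line) // nodefs.available: "0%"
}
```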
	I1212 23:14:10.762050    8472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-392000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.51.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:14:10.779262    8472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:14:10.794245    8472 command_runner.go:130] > kubeadm
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubectl
	I1212 23:14:10.794834    8472 command_runner.go:130] > kubelet
	I1212 23:14:10.794911    8472 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:14:10.809051    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:14:10.823032    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:14:10.848411    8472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:14:10.870951    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 23:14:10.911088    8472 ssh_runner.go:195] Run: grep 172.30.51.245	control-plane.minikube.internal$ /etc/hosts
	I1212 23:14:10.917196    8472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.51.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
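	The `/bin/bash -c` one-liner above makes the hosts entry idempotent: it filters out any existing `control-plane.minikube.internal` line, appends the current IP, and copies the result back over `/etc/hosts`. Roughly the same technique in Go, using the IP and hostname from this run (`upsertHost` is a name invented here for illustration):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so that exactly one line maps ip to host.
// (Illustrative sketch; minikube does this remotely via the bash one-liner.)
func upsertHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing tab-separated entry for this hostname,
		// whatever IP it currently points at.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "172.30.51.245", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```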
	I1212 23:14:10.933858    8472 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000 for IP: 172.30.51.245
	I1212 23:14:10.933934    8472 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:10.934858    8472 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I1212 23:14:10.935530    8472 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I1212 23:14:10.936524    8472 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key
	I1212 23:14:10.936810    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt with IP's: []
	I1212 23:14:11.093297    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt ...
	I1212 23:14:11.093297    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.crt: {Name:mk11a4d3835ab9ea840eb8ac6add84affb6c8dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.094980    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key ...
	I1212 23:14:11.094980    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\client.key: {Name:mk06fddcf6422638da0b31b4d428923c70703238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.095936    8472 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa
	I1212 23:14:11.096955    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa with IP's: [172.30.51.245 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:14:11.196952    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa ...
	I1212 23:14:11.197202    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa: {Name:mkdf435dcc8983bec1e572c7a448162db34b2756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.198846    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa ...
	I1212 23:14:11.198846    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa: {Name:mk41672c6a02cbb3382bef7d288d52f8f77ae5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.199921    8472 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt
	I1212 23:14:11.213239    8472 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key.2023d9fa -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key
	I1212 23:14:11.214508    8472 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key
	I1212 23:14:11.214661    8472 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt with IP's: []
	I1212 23:14:11.328325    8472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt ...
	I1212 23:14:11.328325    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt: {Name:mk6e1ad80e6dad066789266c677d39834bd11583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.330616    8472 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key ...
	I1212 23:14:11.330616    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key: {Name:mk3959079764fecf7ecbee13715f18146dcf3506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:11.332006    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:14:11.332144    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:14:11.332442    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:14:11.342046    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:14:11.342358    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:14:11.342600    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:14:11.342813    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:14:11.343009    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:14:11.343165    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem (1338 bytes)
	W1212 23:14:11.343825    8472 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816_empty.pem, impossibly tiny 0 bytes
	I1212 23:14:11.343825    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1212 23:14:11.344117    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1212 23:14:11.344381    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1212 23:14:11.344630    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1212 23:14:11.344862    8472 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem (1708 bytes)
	I1212 23:14:11.344862    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem -> /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.345574    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.345718    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:11.345852    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:14:11.386214    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:14:11.425674    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:14:11.464191    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:14:11.502474    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:14:11.538128    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:14:11.575129    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:14:11.613906    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:14:11.650659    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\13816.pem --> /usr/share/ca-certificates/13816.pem (1338 bytes)
	I1212 23:14:11.686706    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /usr/share/ca-certificates/138162.pem (1708 bytes)
	I1212 23:14:11.726349    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:14:11.762200    8472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:14:11.800421    8472 ssh_runner.go:195] Run: openssl version
	I1212 23:14:11.809841    8472 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:14:11.823469    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13816.pem && ln -fs /usr/share/ca-certificates/13816.pem /etc/ssl/certs/13816.pem"
	I1212 23:14:11.861330    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.867989    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:21 /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.882273    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13816.pem
	I1212 23:14:11.889871    8472 command_runner.go:130] > 51391683
	I1212 23:14:11.903385    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13816.pem /etc/ssl/certs/51391683.0"
	I1212 23:14:11.935310    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138162.pem && ln -fs /usr/share/ca-certificates/138162.pem /etc/ssl/certs/138162.pem"
	I1212 23:14:11.964261    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970426    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.970992    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:21 /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.982253    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138162.pem
	I1212 23:14:11.990140    8472 command_runner.go:130] > 3ec20f2e
	I1212 23:14:12.009886    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138162.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:14:12.038995    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:14:12.069702    8472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.076435    8472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.089604    8472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:14:12.096884    8472 command_runner.go:130] > b5213941
	I1212 23:14:12.110390    8472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
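	Each `openssl x509 -hash` / `ln -fs` pair above installs a CA into OpenSSL's hashed lookup directory: verification code finds a CA by the hash of its subject name, expecting a file or symlink named `<hash>.0` under `/etc/ssl/certs`. A compact Go sketch of the same two steps, shelling out to the same binaries the log shows (`installCA` is an illustrative helper, not a minikube function):

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks certPath into certsDir under OpenSSL's <subject-hash>.0
// naming convention so TLS libraries can find it during chain verification.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	// -fs replaces any stale symlink, matching the log's idempotent form.
	return exec.Command("ln", "-fs", certPath, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```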
	I1212 23:14:12.140395    8472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:14:12.146418    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:14:12.146418    8472 kubeadm.go:404] StartCluster: {Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:14:12.155995    8472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 23:14:12.194954    8472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:14:12.210497    8472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:14:12.223698    8472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:14:12.252003    8472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:14:12.266277    8472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266543    8472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:14:12.266717    8472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:14:12.516893    8472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.516947    8472 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:14:12.517226    8472 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:14:12.517226    8472 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:14:13.027121    8472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027121    8472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:14:13.027384    8472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027384    8472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:14:13.027545    8472 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.027656    8472 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:14:13.446026    8472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447343    8472 out.go:204]   - Generating certificates and keys ...
	I1212 23:14:13.446026    8472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:14:13.447732    8472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:14:13.447800    8472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:14:13.448160    8472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.448217    8472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:14:13.576197    8472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.576331    8472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:14:13.756341    8472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.756398    8472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:14:13.844910    8472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:13.844957    8472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:14:14.189004    8472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.189084    8472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:14:14.353924    8472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.353924    8472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:14:14.354351    8472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.354351    8472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.509618    8472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.509618    8472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:14:14.510200    8472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.510200    8472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-392000] and IPs [172.30.51.245 127.0.0.1 ::1]
	I1212 23:14:14.634812    8472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.634883    8472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:14:14.965686    8472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:14.965747    8472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:14:15.155790    8472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:14:15.155863    8472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:14:15.156194    8472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.156194    8472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:14:15.627970    8472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:15.628062    8472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:14:16.106269    8472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.106461    8472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:14:16.241202    8472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.241256    8472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:14:16.532306    8472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.532306    8472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:14:16.533302    8472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.533432    8472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:14:16.538562    8472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.538657    8472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:14:16.539723    8472 out.go:204]   - Booting up control plane ...
	I1212 23:14:16.539967    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.540045    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:14:16.541855    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.541855    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:14:16.543221    8472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.543286    8472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:14:16.570893    8472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.570998    8472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:14:16.572167    8472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572329    8472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:14:16.572476    8472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:14:16.572590    8472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:14:16.741649    8472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:16.741649    8472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:14:25.247209    8472 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247209    8472 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.504943 seconds
	I1212 23:14:25.247636    8472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.247636    8472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:14:25.274937    8472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.274937    8472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:14:25.809600    8472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.809600    8472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:14:25.810164    8472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:25.810216    8472 kubeadm.go:322] [mark-control-plane] Marking the node multinode-392000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:14:26.326643    8472 kubeadm.go:322] [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.327542    8472 out.go:204]   - Configuring RBAC rules ...
	I1212 23:14:26.326643    8472 command_runner.go:130] > [bootstrap-token] Using token: 25uq60.iet6b6wkpyiimnbc
	I1212 23:14:26.328018    8472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.328018    8472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:14:26.341522    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.341728    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:14:26.354025    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.354025    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:14:26.359843    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.359843    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:14:26.364553    8472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.364553    8472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:14:26.369249    8472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.369249    8472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:14:26.393459    8472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.393481    8472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:14:26.711238    8472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.711357    8472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:14:26.750599    8472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.750686    8472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:14:26.751909    8472 kubeadm.go:322] 
	I1212 23:14:26.752244    8472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752244    8472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:14:26.752424    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.752475    8472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:14:26.752475    8472 kubeadm.go:322] 
	I1212 23:14:26.753252    8472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753252    8472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:14:26.753309    8472 kubeadm.go:322] 
	I1212 23:14:26.753415    8472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:14:26.753445    8472 kubeadm.go:322] 
	I1212 23:14:26.753445    8472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:14:26.753445    8472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:14:26.753445    8472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.753445    8472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:14:26.754014    8472 kubeadm.go:322] 
	I1212 23:14:26.754183    8472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754220    8472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:14:26.754289    8472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:14:26.754289    8472 kubeadm.go:322] 
	I1212 23:14:26.754289    8472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754289    8472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.754820    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754820    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 \
	I1212 23:14:26.754878    8472 kubeadm.go:322] 	--control-plane 
	I1212 23:14:26.754917    8472 command_runner.go:130] > 	--control-plane 
	I1212 23:14:26.754917    8472 kubeadm.go:322] 
	I1212 23:14:26.754995    8472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:14:26.755080    8472 kubeadm.go:322] 
	I1212 23:14:26.755165    8472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 25uq60.iet6b6wkpyiimnbc \
	I1212 23:14:26.755165    8472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755165    8472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:149ee08a038921d860edcde7072b68d6580231d853c05e972e894f70ea572ed7 
	I1212 23:14:26.755707    8472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:14:26.755762    8472 cni.go:84] Creating CNI manager for ""
	I1212 23:14:26.755762    8472 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:14:26.756717    8472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:14:26.771363    8472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:14:26.781345    8472 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 23:14:26.781345    8472 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:14:26.781345    8472 command_runner.go:130] > Access: 2023-12-12 23:12:39.138849800 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] > Change: 2023-12-12 23:12:30.064000000 +0000
	I1212 23:14:26.781345    8472 command_runner.go:130] >  Birth: -
	I1212 23:14:26.781345    8472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:14:26.781345    8472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:14:26.831214    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:14:28.360489    8472 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:14:28.360489    8472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5292685s)
	I1212 23:14:28.360489    8472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:14:28.377434    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.378438    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-392000 minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.385676    8472 command_runner.go:130] > -16
	I1212 23:14:28.385745    8472 ops.go:34] apiserver oom_adj: -16
	I1212 23:14:28.554211    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:14:28.554334    8472 command_runner.go:130] > node/multinode-392000 labeled
	I1212 23:14:28.574988    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.698031    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:28.717179    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:28.830537    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.348608    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.461037    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:29.849506    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:29.957356    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.362625    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.472272    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:30.848396    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:30.953849    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.353576    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.462341    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:31.853090    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:31.967586    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.355892    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.469924    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:32.859728    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:32.962773    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.364239    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.470177    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:33.864784    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:33.968916    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.351439    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.459257    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:34.855142    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:34.992369    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.364118    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.480745    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:35.848471    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:35.981045    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.353504    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:36.474547    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:36.857811    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.009603    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.360939    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.541831    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:37.855360    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:37.978223    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.358089    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:38.550481    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:38.868761    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.022604    8472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:14:39.352440    8472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:14:39.596621    8472 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:14:39.596712    8472 command_runner.go:130] > default   0         0s
	I1212 23:14:39.596736    8472 kubeadm.go:1088] duration metric: took 11.2361966s to wait for elevateKubeSystemPrivileges.
	I1212 23:14:39.596811    8472 kubeadm.go:406] StartCluster complete in 27.450269s
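	The burst of `kubectl get sa default` retries above is a plain poll: kubeadm creates the `default` ServiceAccount asynchronously, so minikube keeps asking until it exists (11.2s in this run) before moving on. A sketch of the same wait using client-go's polling helper, assuming the kubeconfig path shown in the log (an illustration, not minikube's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms for up to 2 minutes until the "default"
	// ServiceAccount exists; NotFound just means "keep waiting".
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil
			}
			return err == nil, err
		})
	fmt.Println("default ServiceAccount ready:", err == nil)
}
```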
	I1212 23:14:39.596862    8472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.597021    8472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.598694    8472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:14:39.600390    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:14:39.600697    8472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:14:39.600890    8472 addons.go:69] Setting storage-provisioner=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:69] Setting default-storageclass=true in profile "multinode-392000"
	I1212 23:14:39.600953    8472 addons.go:231] Setting addon storage-provisioner=true in "multinode-392000"
	I1212 23:14:39.601014    8472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-392000"
	I1212 23:14:39.601153    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:39.601286    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:39.602024    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.602448    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:39.615520    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.616537    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.618133    8472 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:14:39.618679    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.618746    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.618746    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.618746    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.632969    8472 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1212 23:14:39.632969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Audit-Id: 48d468c3-d2b5-4ebf-8a31-5cfcaaf2e038
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.633400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.633400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.633475    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.633529    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.633615    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634237    8472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"382","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.634414    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.634442    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.634442    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:39.634488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.647166    8472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:14:39.647166    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.647166    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Audit-Id: 1d18df1e-467b-45b4-8fd3-f1be9c0eb077
	I1212 23:14:39.647166    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.647166    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.647166    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:14:39.647166    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.647166    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.647166    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.650190    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:39.650593    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.650593    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.650682    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Content-Length: 291
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.650682    8472 round_trippers.go:580]     Audit-Id: 257b2ee0-65f9-4fbe-a3e6-2b26b38e4e97
	I1212 23:14:39.650746    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.650746    8472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"58dc3b94-4b05-4a1e-86e3-73cdde134480","resourceVersion":"384","creationTimestamp":"2023-12-12T23:14:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1212 23:14:39.650879    8472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-392000" context rescaled to 1 replicas
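
The GET/PUT pair above rescales coredns through the autoscaling/v1 Scale subresource rather than editing the Deployment object itself. A minimal client-go sketch of that same call sequence (an illustration, not minikube's exact code; it assumes a kubeconfig at the default path):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deploys := cs.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale, as in the log above.
	scale, err := deploys.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The PUT in the log changes spec.replicas from 2 to 1.
	scale.Spec.Replicas = 1
	if _, err := deploys.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
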
	I1212 23:14:39.650983    8472 start.go:223] Will wait 6m0s for node &{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 23:14:39.652101    8472 out.go:177] * Verifying Kubernetes components...
	I1212 23:14:39.667782    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:39.958848    8472 command_runner.go:130] > apiVersion: v1
	I1212 23:14:39.958848    8472 command_runner.go:130] > data:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   Corefile: |
	I1212 23:14:39.958848    8472 command_runner.go:130] >     .:53 {
	I1212 23:14:39.958848    8472 command_runner.go:130] >         errors
	I1212 23:14:39.958848    8472 command_runner.go:130] >         health {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            lameduck 5s
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         ready
	I1212 23:14:39.958848    8472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            pods insecure
	I1212 23:14:39.958848    8472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:14:39.958848    8472 command_runner.go:130] >            ttl 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         prometheus :9153
	I1212 23:14:39.958848    8472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:14:39.958848    8472 command_runner.go:130] >            max_concurrent 1000
	I1212 23:14:39.958848    8472 command_runner.go:130] >         }
	I1212 23:14:39.958848    8472 command_runner.go:130] >         cache 30
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loop
	I1212 23:14:39.958848    8472 command_runner.go:130] >         reload
	I1212 23:14:39.958848    8472 command_runner.go:130] >         loadbalance
	I1212 23:14:39.958848    8472 command_runner.go:130] >     }
	I1212 23:14:39.958848    8472 command_runner.go:130] > kind: ConfigMap
	I1212 23:14:39.958848    8472 command_runner.go:130] > metadata:
	I1212 23:14:39.958848    8472 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:14:26Z"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   name: coredns
	I1212 23:14:39.958848    8472 command_runner.go:130] >   namespace: kube-system
	I1212 23:14:39.958848    8472 command_runner.go:130] >   resourceVersion: "257"
	I1212 23:14:39.958848    8472 command_runner.go:130] >   uid: 7f397c04-a5c3-4364-9f10-d28458f5d6c8
	I1212 23:14:39.959540    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:14:39.961001    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:39.962156    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:39.963642    8472 node_ready.go:35] waiting up to 6m0s for node "multinode-392000" to be "Ready" ...
	I1212 23:14:39.963798    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.963914    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.963987    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.963987    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.969659    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:39.969659    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.969659    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Audit-Id: ed4f4991-8208-4d64-8919-42fbdb031b1b
	I1212 23:14:39.969659    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.970862    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:39.972406    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:39.972406    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:39.972643    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:39.972643    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:39.974394    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:39.975312    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Audit-Id: 8a9ed035-646e-4f38-b110-fe61c0dc496f
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:39.975312    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:39.975312    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:39.975401    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:39 GMT
	I1212 23:14:39.975946    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.488957    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.488957    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.488957    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.488957    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.492969    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:40.492969    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Audit-Id: d903c580-8adc-4d96-8f5f-d51f731bc93c
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.492969    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.492969    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.492969    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:40.668167    8472 command_runner.go:130] > configmap/coredns replaced
	I1212 23:14:40.669157    8472 start.go:929] {"host.minikube.internal": 172.30.48.1} host record injected into CoreDNS's ConfigMap
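
The "configmap/coredns replaced" result above comes from the kubectl|sed pipeline a few lines earlier: it splices a hosts{} block (mapping 172.30.48.1 to host.minikube.internal) in front of CoreDNS's forward plugin and a log directive ahead of errors. A hedged client-go equivalent of that edit, for illustration only (minikube actually performs it with kubectl over SSH, as logged):

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostsBlock mirrors what the sed expression inserts ahead of the forward
// plugin; the IP is this run's host.minikube.internal address.
const hostsBlock = "        hosts {\n           172.30.48.1 host.minikube.internal\n           fallthrough\n        }\n"

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	cms := cs.CoreV1().ConfigMaps("kube-system")

	cm, err := cms.Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	corefile := cm.Data["Corefile"]
	// Same two edits as the sed pipeline: hosts{} before forward, log before errors.
	corefile = strings.Replace(corefile, "        forward . /etc/resolv.conf", hostsBlock+"        forward . /etc/resolv.conf", 1)
	corefile = strings.Replace(corefile, "        errors", "        log\n        errors", 1)
	cm.Data["Corefile"] = corefile

	if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
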
	I1212 23:14:40.981876    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:40.981950    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:40.982011    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:40.982011    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:40.991394    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:40.991394    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Audit-Id: ab5b6285-e3ff-4e6f-b61b-a20df0759ba6
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:40.991394    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:40.991394    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:40 GMT
	I1212 23:14:40.991394    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.489914    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.490030    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.490030    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.490030    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.494868    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:41.495917    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.496035    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.496123    8472 round_trippers.go:580]     Audit-Id: 1e563910-36f9-4968-810e-a0bd4b1bd52f
	I1212 23:14:41.496167    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.496302    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.496696    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.902667    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:41.903563    8472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:14:41.903563    8472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 23:14:41.904285    8472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:41.904285    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:14:41.904285    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.905110    8472 kapi.go:59] client config for multinode-392000: &rest.Config{Host:"https://172.30.51.245:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-392000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23a9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:14:41.906532    8472 addons.go:231] Setting addon default-storageclass=true in "multinode-392000"
	I1212 23:14:41.906532    8472 host.go:66] Checking if "multinode-392000" exists ...
	I1212 23:14:41.907304    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:41.980106    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:41.980486    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:41.980486    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:41.980486    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:41.985786    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:41.985786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:41.985786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:41 GMT
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Audit-Id: 08bb64de-dde1-4fa6-8913-0f6b5de0cf24
	I1212 23:14:41.985786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:41.986033    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:41.986033    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:41.986463    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:41.987219    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
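
node_ready.go is polling the node object roughly every 500ms, within the 6m0s budget announced earlier, until its NodeReady condition reports True; the repeated GETs above and below are that loop. A minimal sketch of such a wait (assuming the default kubeconfig; not minikube's exact implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // the "Will wait 6m0s" budget
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-392000", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the request cadence in the log
	}
	fmt.Println("timed out waiting for node to be Ready")
}
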
	I1212 23:14:42.486548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.486653    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.486653    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.486653    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.496333    8472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 23:14:42.496447    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.496447    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.496524    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.496524    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Audit-Id: 4ab1601a-d766-4e5d-a976-df70bc7f3fc6
	I1212 23:14:42.496582    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.496654    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.497705    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:42.979753    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:42.979865    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:42.979865    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:42.979865    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:42.984301    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:42.984301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Audit-Id: d84e4388-d133-418c-ad44-eb666ea80368
	I1212 23:14:42.984301    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:42.984627    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:42.984678    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:42.984771    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:42 GMT
	I1212 23:14:42.985134    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.487286    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.487436    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.487436    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.487436    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.493059    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.493240    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.493240    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.493331    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.493331    8472 round_trippers.go:580]     Audit-Id: ff7197c8-30b8-4b58-8cc1-df9d319b0dbf
	I1212 23:14:43.493700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:43.979059    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:43.979132    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:43.979132    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:43.979132    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:43.984231    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:43.984231    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Audit-Id: a3b2e6ef-d4d8-4f3e-b9c5-6d5c3c21bbd3
	I1212 23:14:43.984231    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:43.984345    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:43.984345    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:43.984416    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:43.984416    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:43 GMT
	I1212 23:14:43.984602    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.095027    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.095183    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.095249    8472 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:14:44.095249    8472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:14:44.095249    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000 ).state
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:44.120050    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:44.120131    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:44.483249    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.483332    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.483332    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.483332    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.487173    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.488191    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.488191    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.488191    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.488335    8472 round_trippers.go:580]     Audit-Id: 266b4ffc-e86f-4f1b-b463-36bca9136481
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.488372    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.488839    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:44.489392    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:44.989331    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:44.989428    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:44.989428    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:44.989428    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:44.992917    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:44.993400    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Audit-Id: d75583c4-9a74-49b4-bbf3-b56138886974
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:44.993400    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:44.993400    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:44 GMT
	I1212 23:14:44.993757    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.481494    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.481494    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.481494    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.481778    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.487002    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:45.487002    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Audit-Id: 34cccb14-bef0-4d33-bac4-e822ad4bf7d0
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.487084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.487084    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.487387    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:45.990444    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:45.990444    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:45.990444    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:45.990444    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:45.994459    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:45.995453    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Audit-Id: 75a4ef11-ddaa-4f93-8672-e7309c071368
	I1212 23:14:45.995453    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:45.995553    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:45.995597    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:45.995597    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:45 GMT
	I1212 23:14:45.996008    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.478860    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.478860    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.478860    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.478860    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.482906    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:46.482906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.482906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.484021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.484021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Audit-Id: f2e453d5-50bc-4639-bda1-a5a03905d0ad
	I1212 23:14:46.484057    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:14:46.484906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.484906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.485283    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000 ).networkadapters[0]).ipaddresses[0]
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:46.902984    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:46.902984    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
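
The ssh client above is built from the IP that the preceding PowerShell query returned. A small sketch of that lookup, shelling out the same way the libmachine lines show (Windows-only; hypervVMIP is a hypothetical helper name, not a minikube function):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hypervVMIP asks Hyper-V for the first IP of the VM's first network adapter,
// mirroring the PowerShell expression logged by libmachine above.
func hypervVMIP(vm string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil // e.g. 172.30.51.245 in this run
}

func main() {
	ip, err := hypervVMIP("multinode-392000")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}
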
	I1212 23:14:46.980436    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:46.980521    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:46.980521    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:46.980521    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:46.984189    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:46.984189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Audit-Id: 7c159fbf-c0d0-41ed-a33b-761beff59770
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:46.984189    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:46.984333    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:46.984333    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:46 GMT
	I1212 23:14:46.984744    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:46.985579    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:47.051355    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:14:47.484303    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.484303    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.484303    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.484303    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.488895    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:47.488895    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Audit-Id: 28e8c341-cf42-49da-a69a-ab79f001048f
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.488895    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.488895    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.489240    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:47.868848    8472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:14:47.868848    8472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:14:47.868942    8472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:14:47.868942    8472 command_runner.go:130] > pod/storage-provisioner created
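
The storage-provisioner manifest was first scp'd into the VM (the "scp memory" line earlier) and is then applied with the bundled kubectl over SSH, producing the created lines above. A rough sketch of that remote apply using golang.org/x/crypto/ssh, an illustration under the paths and address this run logged, not minikube's ssh_runner:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address are the ones this test run logged.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.30.51.245:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same command the ssh_runner logs below this point.
	out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
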
	I1212 23:14:47.990911    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:47.991083    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:47.991083    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:47.991083    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:47.996324    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:47.996324    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:47.996324    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:47 GMT
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Audit-Id: 898f23b9-63a4-46cb-8539-9e21fae3e688
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:47.996324    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:47.997714    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.480781    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.480862    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.480862    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.480862    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.484374    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.485189    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.485189    8472 round_trippers.go:580]     Audit-Id: 1a3b1ec7-5eb6-4bb8-b344-5426a5516c00
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.485269    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.485269    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.485621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.989623    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:48.989623    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:48.989623    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:48.989698    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:48.992877    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:48.993906    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:48 GMT
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Audit-Id: 975a7df8-210f-4288-bec3-86537d1ea98a
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:48.993906    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:48.993906    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:48.993906    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:48.993906    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
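
The repeating GET requests to /api/v1/nodes/multinode-392000 above are minikube's node-readiness poll (node_ready.go): it fetches the Node object roughly every 500 ms and inspects the NodeReady condition until it reports "True". The following is a minimal client-go sketch of that pattern, for illustration only, not minikube's actual implementation; the kubeconfig path and node name are simply the ones visible in this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the Node's NodeReady condition is "True".
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as reported for this run; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500 ms, matching the cadence of the timestamps above.
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-392000", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
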
	I1212 23:14:49.083047    8472 main.go:141] libmachine: [stdout =====>] : 172.30.51.245
	
	I1212 23:14:49.083318    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:14:49.083618    8472 sshutil.go:53] new ssh client: &{IP:172.30.51.245 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa Username:docker}
	I1212 23:14:49.220179    8472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
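
Just above, sshutil.go built an SSH client for the VM from the machine's id_rsa key, and ssh_runner.go runs kubectl inside the guest over that connection. Below is a rough sketch of that remote-exec pattern using golang.org/x/crypto/ssh, with the IP, port, user, and key path printed by sshutil.go; it illustrates the idea and is not minikube's sshutil implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address are the ones reported by sshutil.go above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "172.30.51.245:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// The same command ssh_runner.go logs above.
	out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
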
	I1212 23:14:49.478362    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.478404    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.478488    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.478488    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.486550    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:49.486550    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Audit-Id: 886c4e27-fc97-4d2e-be30-23c8528e1331
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.486550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.486550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.487579    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:49.633908    8472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:14:49.634368    8472 round_trippers.go:463] GET https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:14:49.634438    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.634438    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.634438    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.638301    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.638301    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Length: 1273
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Audit-Id: 478d6e3c-e333-45bd-ad37-ff39e2c109a4
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.638518    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.638518    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.638613    8472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:14:49.639458    8472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.639570    8472 round_trippers.go:463] PUT https://172.30.51.245:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:14:49.639570    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.639570    8472 round_trippers.go:473]     Content-Type: application/json
	I1212 23:14:49.639632    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.643499    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:49.643499    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.643499    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Length: 1220
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Audit-Id: a15a2fa8-ae37-4d33-8ee0-c9808f9a288d
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.644178    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.644178    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.644178    8472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"616e5979-a5cc-4764-bb8c-8e7039e4b18a","resourceVersion":"414","creationTimestamp":"2023-12-12T23:14:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:14:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:14:49.682970    8472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:14:49.684353    8472 addons.go:502] enable addons completed in 10.0836106s: enabled=[storage-provisioner default-storageclass]
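
The GET and PUT on /apis/storage.k8s.io/v1/storageclasses just above are the default-storageclass addon stamping the freshly created "standard" class with the storageclass.kubernetes.io/is-default-class annotation. A sketch of that read-modify-write, assuming the same client-go imports and `client` as in the node-readiness example earlier:

// markDefaultStorageClass reproduces the GET+PUT pair seen in the log:
// fetch the StorageClass, set the default-class annotation, write it back.
func markDefaultStorageClass(ctx context.Context, client kubernetes.Interface) error {
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// kubectl and the PV controller read this annotation to choose the
	// default class for PVCs that specify no storageClassName.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	// Update issues the PUT; the resourceVersion carried over from the GET
	// guards against a concurrent modification.
	_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}
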
	I1212 23:14:49.980729    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:49.980729    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:49.980729    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:49.980729    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:49.984838    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:49.985229    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Audit-Id: ce24cfdd-3acb-4830-ac23-4db47133d6a3
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:49.985229    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:49.985323    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:49.985323    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:49 GMT
	I1212 23:14:49.985624    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.483312    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.483375    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.483375    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.483375    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.488227    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:50.488227    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.488227    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Audit-Id: 6991df1a-7c65-4f8c-aa6d-8a4b07664792
	I1212 23:14:50.488227    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.488335    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.488445    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:50.981018    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:50.981153    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:50.981153    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:50.981153    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:50.986420    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:50.987021    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:50.987021    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:50 GMT
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Audit-Id: 05d03ac9-757b-47ae-892d-06c9975e0504
	I1212 23:14:50.987021    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:50.987288    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.481784    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.481935    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.481935    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.481935    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.487331    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:51.487741    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Audit-Id: ea8e810d-7571-41b8-a29c-f7b350aa7e48
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.487741    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.487741    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.488700    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:51.489229    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:51.980060    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:51.980060    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:51.980060    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:51.980060    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:51.986763    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:51.987222    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Audit-Id: e66e1130-e80e-4e5c-a2df-c6f097d5374f
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:51.987303    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:51.987303    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:51 GMT
	I1212 23:14:51.987303    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.487530    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.487615    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.487615    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.487615    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.491306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.491306    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.491306    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Audit-Id: 6d39f79a-048a-4380-88c0-1538a97cf6cb
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.491306    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.492158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:52.988203    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:52.988350    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:52.988350    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:52.988350    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:52.991874    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:52.991874    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Audit-Id: b82dc74d-b44e-41ac-8e64-37803addc6c1
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:52.991874    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:52.991874    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:52.992376    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:52.992376    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:52 GMT
	I1212 23:14:52.992866    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.487128    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.487128    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.487128    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.487128    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.490404    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.490404    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Audit-Id: fcdaf883-7338-4102-abda-846f7169bb26
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.490404    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.490404    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.491349    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:53.491797    8472 node_ready.go:58] node "multinode-392000" has status "Ready":"False"
	I1212 23:14:53.988709    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:53.988958    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:53.988958    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:53.988958    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:53.992351    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:53.992351    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Audit-Id: c1836498-4d32-49e6-a01e-d2011a223374
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:53.992796    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:53.992796    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:53.992872    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:53.992872    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:53 GMT
	I1212 23:14:53.993179    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.484052    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.484152    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.484152    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.484152    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.487262    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:54.487786    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Audit-Id: f53da0c3-a775-4443-aabf-f7c4222d5d96
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.487786    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.487786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.488171    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:54.984021    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:54.984123    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:54.984123    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:54.984123    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:54.989880    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:54.989880    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Audit-Id: c5095c7c-a76c-429e-af60-764abe494287
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:54.989880    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:54.989880    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:54 GMT
	I1212 23:14:54.991622    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.485045    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.485181    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.485181    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.485181    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.489762    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:55.489762    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Audit-Id: 4f7c8477-81de-4b39-8164-bf264c826669
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.489762    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.489762    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.490338    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.490338    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.490621    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"335","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1212 23:14:55.987165    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:55.987255    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.987255    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.987255    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.990960    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:55.991209    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.991209    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Audit-Id: 730af8dd-1c79-432a-ac28-d735f45d211a
	I1212 23:14:55.991209    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.991209    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:55.991993    8472 node_ready.go:49] node "multinode-392000" has status "Ready":"True"
	I1212 23:14:55.991993    8472 node_ready.go:38] duration metric: took 16.0282441s waiting for node "multinode-392000" to be "Ready" ...
	I1212 23:14:55.991993    8472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:14:55.992424    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:55.992451    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:55.992451    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:55.992451    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:55.997828    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:55.997828    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Audit-Id: 52d7810c-f76c-4c45-9178-39943c5e611e
	I1212 23:14:55.997828    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:55.998550    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:55.998550    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:55 GMT
	I1212 23:14:56.000563    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I1212 23:14:56.005713    8472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
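
From here, pod_ready.go runs a per-pod analogue of the node loop: for each system-critical pod it GETs the Pod, checks the PodReady condition, and, as the interleaved node GETs in the log show, re-checks the node as well. A minimal sketch of that check follows, under the same client-go assumptions as the earlier examples; the pod name is the one from this run.

// podIsReady mirrors the condition check behind pod_ready.go's wait.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPod polls until the named kube-system pod reports Ready.
func waitForPod(ctx context.Context, client kubernetes.Interface, name string) error {
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if podIsReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
}

// Usage for this run: waitForPod(ctx, client, "coredns-5dd5756b68-4xn8h")
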
	I1212 23:14:56.005713    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.005713    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.005713    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.005713    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.009293    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:56.009293    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.009293    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Audit-Id: 349c895b-3263-4592-bf5f-cc4fce22f4db
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.009641    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.009732    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.009961    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.010548    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.010601    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.010601    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.010670    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.013302    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.013302    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Audit-Id: 14638822-3485-4ab6-af72-f2d254050772
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.013994    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.013994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.014102    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.014102    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.014313    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.014948    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.014948    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.014948    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.014948    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.017876    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.017876    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Audit-Id: e61611d3-94ea-464c-acce-2a665e01fb85
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.017960    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.018073    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.018159    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.018325    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.018970    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.019023    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.019023    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.019078    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.020855    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:56.020855    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Audit-Id: d723e84b-6004-4853-8f4c-e9de464efdde
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.021714    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.021772    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.021800    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.021800    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.021800    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:56.536622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:56.536622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.536622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.536622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.540896    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:56.540896    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.541442    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Audit-Id: ea416197-cb64-40af-bf73-38fd2e37a823
	I1212 23:14:56.541442    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.541534    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.541534    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.541670    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:56.542439    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:56.542559    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:56.542559    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:56.542559    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:56.544902    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:56.544902    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:56.544902    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:56 GMT
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Audit-Id: 82379cb0-03c3-4187-8a08-c95f8c2d434e
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:56.545742    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:56.546107    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.027636    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.027717    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.027791    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.027791    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.030425    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.030425    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.030425    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Audit-Id: 856b15b9-b6fa-489d-9a24-eaaf1afc5bd5
	I1212 23:14:57.030425    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.031434    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.032501    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.032606    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.032658    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.032658    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.035158    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:57.035158    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Audit-Id: 2f81449f-83b9-4c66-bc2e-17ac17b48322
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.035158    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.035158    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.035158    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:57.534454    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:57.534587    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.534587    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.534587    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.541021    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:57.541365    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.541365    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.541365    8472 round_trippers.go:580]     Audit-Id: bb822741-a39c-491c-8b27-f5dc32b9ac7d
	I1212 23:14:57.541943    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"429","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:14:57.542190    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:57.542190    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:57.542190    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:57.542190    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:57.545257    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:57.545257    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:57.545896    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:57 GMT
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Audit-Id: 27629acd-42f2-4083-aba9-c01ef165283c
	I1212 23:14:57.546009    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:57.546084    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:57.546084    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:57.546180    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:57.546712    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"424","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1212 23:14:58.023516    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-4xn8h
	I1212 23:14:58.023822    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.023880    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.023880    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.027764    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.028057    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.028057    8472 round_trippers.go:580]     Audit-Id: 1522c4b2-abdb-44ed-9ac8-0a151cbe371e
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.028106    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.028106    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.028173    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.028494    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1212 23:14:58.029540    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.029617    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.029617    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.029617    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.032006    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.033008    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Audit-Id: 5f970653-a2f7-4b0e-ab8b-5146ee17b7e9
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.033046    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.033046    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.033115    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.033423    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.034124    8472 pod_ready.go:92] pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.034124    8472 pod_ready.go:81] duration metric: took 2.0284013s waiting for pod "coredns-5dd5756b68-4xn8h" in "kube-system" namespace to be "Ready" ...
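
The pod_ready.go loop above re-fetches the pod (and its node) roughly every 500ms until the pod's Ready condition reports True, up to the stated 6m0s timeout. A minimal sketch of the same readiness check with client-go, assuming a standard kubeconfig; this is illustrative, not minikube's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True, which is
// what flips the log line to `has status "Ready":"True"`.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, matching the timeout in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-4xn8h", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			return podIsReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}
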
	I1212 23:14:58.034124    8472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.034268    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-392000
	I1212 23:14:58.034268    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.034268    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.034268    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.040664    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:58.040664    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.040664    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.040664    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.040786    8472 round_trippers.go:580]     Audit-Id: 8ec23e55-3f6f-45bb-9dd5-58fa0a89221a
	I1212 23:14:58.041172    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-392000","namespace":"kube-system","uid":"9ba15872-d011-4389-bbbd-cda3bb377f30","resourceVersion":"299","creationTimestamp":"2023-12-12T23:14:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.51.245:2379","kubernetes.io/config.hash":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.mirror":"dc8336ef7aecf1b56d0097c8e4931803","kubernetes.io/config.seen":"2023-12-12T23:14:17.439033677Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1212 23:14:58.041719    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.041719    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.041719    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.041719    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.045328    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.045328    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Audit-Id: 9c560ca1-5f98-49b8-ae36-71e9aa076f5e
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.045328    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.045328    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.045328    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.045328    8472 pod_ready.go:92] pod "etcd-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.045328    8472 pod_ready.go:81] duration metric: took 11.2037ms waiting for pod "etcd-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.045328    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-392000
	I1212 23:14:58.046330    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.046330    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.046330    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.048649    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.048649    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Audit-Id: ebed4532-17cb-49da-a702-3de6ff899b2d
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.048649    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.048649    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.048649    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-392000","namespace":"kube-system","uid":"4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75","resourceVersion":"330","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.51.245:8443","kubernetes.io/config.hash":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.mirror":"a728ade276b580d5a5541017805cb6e1","kubernetes.io/config.seen":"2023-12-12T23:14:26.871565960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1212 23:14:58.048649    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.048649    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.048649    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.048649    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.052979    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.052979    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.052979    8472 round_trippers.go:580]     Audit-Id: ba4e3ef6-8436-406b-be77-63a9e785adac
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.053599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.053599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.053729    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.053941    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.054233    8472 pod_ready.go:92] pod "kube-apiserver-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.054233    8472 pod_ready.go:81] duration metric: took 8.9055ms waiting for pod "kube-apiserver-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.054233    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-392000
	I1212 23:14:58.054233    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.054233    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.054233    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.057795    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.057795    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.057795    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.057795    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Audit-Id: 23c9283e-f0e0-44ab-b1c7-820bcafbc897
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.058055    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.058055    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.058481    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-392000","namespace":"kube-system","uid":"60a15f93-6e63-4c2e-a54e-7e6a2275127c","resourceVersion":"296","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.mirror":"870815ec54f710f03be95799f2c404e9","kubernetes.io/config.seen":"2023-12-12T23:14:26.871570660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1212 23:14:58.059284    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.059347    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.059347    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.059347    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.067599    8472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:14:58.067599    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Audit-Id: cd4581bf-1000-4906-812b-59a573920066
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.067599    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.067599    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.068544    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.068544    8472 pod_ready.go:92] pod "kube-controller-manager-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.068544    8472 pod_ready.go:81] duration metric: took 14.3106ms waiting for pod "kube-controller-manager-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.068544    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.194675    8472 request.go:629] Waited for 125.8741ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55nr8
	I1212 23:14:58.194754    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.194825    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.194825    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.198109    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.198109    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Audit-Id: 5a8d39b0-49cf-41c3-9e07-80cfc7e1b033
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.198109    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.198994    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.198994    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.199312    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55nr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"76f72515-2132-4473-883e-2846ebaca62e","resourceVersion":"403","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"932f2a4e-5c28-4c7c-8885-1298fbe1d167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932f2a4e-5c28-4c7c-8885-1298fbe1d167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I1212 23:14:58.398673    8472 request.go:629] Waited for 198.4474ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.398787    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.398787    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.398966    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.401717    8472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:14:58.401717    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.401717    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.402644    8472 round_trippers.go:580]     Audit-Id: b728eb3e-d54c-43cb-90ce-e7b356f69ae4
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.402725    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.402725    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.402828    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.403583    8472 pod_ready.go:92] pod "kube-proxy-55nr8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.403583    8472 pod_ready.go:81] duration metric: took 335.0375ms waiting for pod "kube-proxy-55nr8" in "kube-system" namespace to be "Ready" ...
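
The "Waited for ... due to client-side throttling, not priority and fairness" lines are produced by client-go's own rate limiter delaying requests before they leave the process; they are unrelated to the server-side APF whose X-Kubernetes-Pf-* response headers appear above. That limiter is a token bucket sized by rest.Config's QPS and Burst. A small sketch (the values here are illustrative assumptions, not minikube's settings):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// rest.Config{QPS: 5, Burst: 10} installs a limiter of exactly this
	// kind on every request; Accept blocks until a token is available,
	// and that blocking time is what the "Waited for ..." lines report.
	rl := flowcontrol.NewTokenBucketRateLimiter(5 /* qps */, 10 /* burst */)

	start := time.Now()
	for i := 0; i < 15; i++ {
		rl.Accept() // the first 10 pass immediately; the rest are throttled
	}
	// Expect roughly (15-10)/5 = 1s of accumulated waiting.
	fmt.Println("15 requests admitted in", time.Since(start))
}
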
	I1212 23:14:58.403583    8472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.601380    8472 request.go:629] Waited for 197.6478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-392000
	I1212 23:14:58.601681    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.601681    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.601681    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.605957    8472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:14:58.606145    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Audit-Id: 02f9b40f-c4e0-4c98-bcbc-9913ccb796e7
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.606145    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.606145    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.606409    8472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-392000","namespace":"kube-system","uid":"1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2","resourceVersion":"295","creationTimestamp":"2023-12-12T23:14:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.mirror":"5575d46497071668d59c6aaa70135fd4","kubernetes.io/config.seen":"2023-12-12T23:14:26.871571660Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1212 23:14:58.789396    8472 request.go:629] Waited for 182.2618ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789688    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes/multinode-392000
	I1212 23:14:58.789779    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.789779    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.789828    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.793340    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:58.794060    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.794126    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Audit-Id: e123c53f-d439-4d57-931f-9f875d26f581
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.794126    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.794126    8472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T23:14:22Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1212 23:14:58.795030    8472 pod_ready.go:92] pod "kube-scheduler-multinode-392000" in "kube-system" namespace has status "Ready":"True"
	I1212 23:14:58.795030    8472 pod_ready.go:81] duration metric: took 391.4452ms waiting for pod "kube-scheduler-multinode-392000" in "kube-system" namespace to be "Ready" ...
	I1212 23:14:58.795030    8472 pod_ready.go:38] duration metric: took 2.8027177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:14:58.795030    8472 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:14:58.810986    8472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:14:58.830637    8472 command_runner.go:130] > 2099
	I1212 23:14:58.830637    8472 api_server.go:72] duration metric: took 19.1794438s to wait for apiserver process to appear ...
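
The apiserver-process wait above comes down to one command run over SSH: "sudo pgrep -xnf kube-apiserver.*minikube.*", where -f matches against the full command line, -x requires the whole line to match the pattern, and -n prints only the newest matching PID (2099 here). A sketch of the same check, run locally via os/exec for brevity rather than over minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// pgrep exits non-zero when nothing matches, which surfaces as err.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver pid:", pid) // e.g. 2099 in the log above
}
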
	I1212 23:14:58.830637    8472 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:14:58.830637    8472 api_server.go:253] Checking apiserver healthz at https://172.30.51.245:8443/healthz ...
	I1212 23:14:58.838776    8472 api_server.go:279] https://172.30.51.245:8443/healthz returned 200:
	ok
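
The healthz probe is a plain GET against the apiserver's /healthz endpoint; a 200 with body "ok" counts as healthy, and on standard clusters this endpoint is typically readable without resource-level credentials. A short sketch; TLS verification is skipped here only to keep the example small (minikube talks to the cluster with its client certificates), and the address is the one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	c := &http.Client{Transport: &http.Transport{
		// Assumption for brevity; real callers should trust the cluster CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := c.Get("https://172.30.51.245:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // want: 200 ok
}
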
	I1212 23:14:58.839718    8472 round_trippers.go:463] GET https://172.30.51.245:8443/version
	I1212 23:14:58.839718    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.839718    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.839718    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.841290    8472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:14:58.841290    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.841290    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.841730    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Content-Length: 264
	I1212 23:14:58.841730    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.841836    8472 round_trippers.go:580]     Audit-Id: 46b8d46d-380f-4f82-941f-34d5ff7fc981
	I1212 23:14:58.841875    8472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:14:58.841973    8472 api_server.go:141] control plane version: v1.28.4
	I1212 23:14:58.842105    8472 api_server.go:131] duration metric: took 11.468ms to wait for apiserver health ...
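
The /version document above is what client-go's discovery client fetches; the "control plane version" printed in the log is its gitVersion field. The equivalent call, assuming a standard kubeconfig:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion() // GET /version under the hood
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.4
}
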
	I1212 23:14:58.842105    8472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:14:58.990794    8472 request.go:629] Waited for 148.3275ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990949    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:58.990993    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:58.990993    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:58.990993    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:58.996780    8472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:14:58.996780    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:58.996780    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:58.996780    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:58 GMT
	I1212 23:14:58.997050    8472 round_trippers.go:580]     Audit-Id: ef9a1c82-2d0d-4fd5-aef9-3720896905c4
	I1212 23:14:58.998795    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.002276    8472 system_pods.go:59] 8 kube-system pods found
	I1212 23:14:59.002323    8472 system_pods.go:61] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.002323    8472 system_pods.go:61] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.002414    8472 system_pods.go:61] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.002414    8472 system_pods.go:74] duration metric: took 160.3082ms to wait for pod list to return data ...
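
The system_pods wait swaps the earlier per-pod GETs for a single LIST of kube-system followed by a per-item phase check, which is where the "8 kube-system pods found ... Running" summary comes from. Roughly, with client-go (a sketch, not minikube's code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%q [%s] running=%v\n", p.Name, p.UID, running)
	}
}
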
	I1212 23:14:59.002414    8472 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:14:59.195077    8472 request.go:629] Waited for 192.5258ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:14:59.195622    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.195622    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.195622    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.199306    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.199787    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Audit-Id: d11e054d-44f1-4ba9-98c1-9a69160ffdff
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.199787    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.199787    8472 round_trippers.go:580]     Content-Length: 261
	I1212 23:14:59.199787    8472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7c305be4-9460-4ff1-a283-85a13dcb1cde","resourceVersion":"367","creationTimestamp":"2023-12-12T23:14:39Z"}}]}
	I1212 23:14:59.199787    8472 default_sa.go:45] found service account: "default"
	I1212 23:14:59.199787    8472 default_sa.go:55] duration metric: took 197.3719ms for default service account to be created ...
	I1212 23:14:59.199787    8472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:14:59.396801    8472 request.go:629] Waited for 196.4246ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/namespaces/kube-system/pods
	I1212 23:14:59.397321    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.397321    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.397321    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.400691    8472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:14:59.400691    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.400691    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Audit-Id: 70f11694-1074-4f5f-b23d-4a24efbaa455
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.400691    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.403399    8472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-4xn8h","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17b97a16-eb8e-4bb4-a224-baa68e4c5efe","resourceVersion":"443","creationTimestamp":"2023-12-12T23:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e0a25d-9bfc-4a53-8aac-7d8f107b29eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1212 23:14:59.408656    8472 system_pods.go:86] 8 kube-system pods found
	I1212 23:14:59.409213    8472 system_pods.go:89] "coredns-5dd5756b68-4xn8h" [17b97a16-eb8e-4bb4-a224-baa68e4c5efe] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "etcd-multinode-392000" [9ba15872-d011-4389-bbbd-cda3bb377f30] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kindnet-bpcxd" [efa60598-6118-442f-a5ba-bab75ebbeb2a] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-apiserver-multinode-392000" [4d49db4f-f1dd-46b3-b0bf-f66f2ea75a75] Running
	I1212 23:14:59.409213    8472 system_pods.go:89] "kube-controller-manager-multinode-392000" [60a15f93-6e63-4c2e-a54e-7e6a2275127c] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-proxy-55nr8" [76f72515-2132-4473-883e-2846ebaca62e] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "kube-scheduler-multinode-392000" [1c53fbc3-4f54-4ff5-9f1b-dbfb5a76bbe2] Running
	I1212 23:14:59.409293    8472 system_pods.go:89] "storage-provisioner" [0a8f47d8-719b-4927-a11d-e796c2d01064] Running
	I1212 23:14:59.409293    8472 system_pods.go:126] duration metric: took 209.505ms to wait for k8s-apps to be running ...
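The readiness loop above issues plain GETs against /api/v1 until every kube-system pod reports Running; the "Waited ... due to client-side throttling" lines come from client-go's token-bucket rate limiter on the client, not from API priority and fairness. A minimal client-go sketch of the same check (kubeconfig path taken from this run; error handling trimmed to panics):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same request as GET /api/v1/namespaces/kube-system/pods in the log.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("%s not running yet (%s)\n", p.Name, p.Status.Phase)
            }
        }
    }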
	I1212 23:14:59.409358    8472 system_svc.go:44] waiting for kubelet service to be running ...
	I1212 23:14:59.423142    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:14:59.445203    8472 system_svc.go:56] duration metric: WaitForService took 35.9106ms to wait for kubelet.
	I1212 23:14:59.445871    8472 kubeadm.go:581] duration metric: took 19.7946755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:14:59.445871    8472 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:14:59.598916    8472 request.go:629] Waited for 152.7318ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:463] GET https://172.30.51.245:8443/api/v1/nodes
	I1212 23:14:59.599012    8472 round_trippers.go:469] Request Headers:
	I1212 23:14:59.599012    8472 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:14:59.599012    8472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1212 23:14:59.605849    8472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 23:14:59.605849    8472 round_trippers.go:577] Response Headers:
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Audit-Id: 36bbb4b8-2cd2-4825-9a0a-f9d3f7de5388
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Content-Type: application/json
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3d7cea25-9365-4daa-a72b-25f00f1c7aae
	I1212 23:14:59.605849    8472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 508b04c0-3b98-412c-9de3-6e57fb37ae34
	I1212 23:14:59.605849    8472 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:14:59 GMT
	I1212 23:14:59.605849    8472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-392000","uid":"2ba16b38-ac55-4b74-9d64-bf0746eeacc3","resourceVersion":"449","creationTimestamp":"2023-12-12T23:14:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-392000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-392000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_14_28_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I1212 23:14:59.606649    8472 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:14:59.606649    8472 node_conditions.go:123] node cpu capacity is 2
	I1212 23:14:59.606649    8472 node_conditions.go:105] duration metric: took 160.7768ms to run NodePressure ...
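The NodePressure step reads the NodeList shown above and extracts capacity quantities from node status. A short sketch of that extraction with client-go (the values in the comment are the ones this run logged):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            // e.g. "multinode-392000: cpu=2 ephemeral-storage=17784752Ki"
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }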
	I1212 23:14:59.606649    8472 start.go:228] waiting for startup goroutines ...
	I1212 23:14:59.606649    8472 start.go:233] waiting for cluster config update ...
	I1212 23:14:59.606649    8472 start.go:242] writing updated cluster config ...
	I1212 23:14:59.609246    8472 out.go:177] 
	I1212 23:14:59.621487    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:14:59.622710    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.625530    8472 out.go:177] * Starting worker node multinode-392000-m02 in cluster multinode-392000
	I1212 23:14:59.626570    8472 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 23:14:59.626570    8472 cache.go:56] Caching tarball of preloaded images
	I1212 23:14:59.627622    8472 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 23:14:59.627622    8472 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I1212 23:14:59.627622    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:14:59.635421    8472 start.go:365] acquiring machines lock for multinode-392000-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:14:59.636404    8472 start.go:369] acquired machines lock for "multinode-392000-m02" in 983.5µs
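The printed spec {Name:mk814f... Delay:500ms Timeout:13m0s Cancel:<nil>} has the shape of a juju/mutex-style acquire: retry every Delay until Timeout so concurrent minikube processes serialize machine creation. A stand-in sketch of those semantics using only the standard library (this is not minikube's actual implementation):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    // acquire tries to take a named lock every delay until timeout elapses.
    func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
        lock := filepath.Join(os.TempDir(), name+".lock")
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lock) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for " + lock)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("mk814f158b6187cc9297257c36fdbe0d2871c950", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; safe to provision the machine")
    }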
	I1212 23:14:59.636641    8472 start.go:93] Provisioning new machine with config: &{Name:multinode-392000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-392000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.30.51.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1212 23:14:59.636641    8472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1212 23:14:59.637295    8472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:14:59.637925    8472 start.go:159] libmachine.API.Create for "multinode-392000" (driver="hyperv")
	I1212 23:14:59.637925    8472 client.go:168] LocalClient.Create starting
	I1212 23:14:59.637925    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I1212 23:14:59.638507    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.638593    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.638845    8472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I1212 23:14:59.639076    8472 main.go:141] libmachine: Decoding PEM data...
	I1212 23:14:59.639124    8472 main.go:141] libmachine: Parsing certificate...
	I1212 23:14:59.639207    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1212 23:15:01.516858    8472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:01.517099    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stdout =====>] : False
	
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:03.276939    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:04.771547    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:04.771630    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:04.771709    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:08.419999    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:08.420189    8472 main.go:141] libmachine: [stderr =====>] : 
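Every Hyper-V interaction in this log is a powershell.exe -NoProfile -NonInteractive invocation whose stdout is parsed back, here as JSON. A sketch of the switch query above from Go (the command text is copied from the log; encoding/json matches the Id/Name/SwitchType keys case-insensitively):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)`
        out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }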
	I1212 23:15:08.422680    8472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:15:08.872411    8472 main.go:141] libmachine: Creating SSH key...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: Creating VM...
	I1212 23:15:09.214904    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1212 23:15:12.102765    8472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1212 23:15:12.102977    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:12.103063    8472 main.go:141] libmachine: Using switch "Default Switch"
	I1212 23:15:12.103063    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1212 23:15:13.864474    8472 main.go:141] libmachine: [stdout =====>] : True
	
	I1212 23:15:13.864777    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:13.864985    8472 main.go:141] libmachine: Creating VHD
	I1212 23:15:13.864985    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C3CD4AE2-4C48-4AEE-B99B-DEEF0B4769F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1212 23:15:17.628988    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:17.628988    8472 main.go:141] libmachine: Writing magic tar header
	I1212 23:15:17.629139    8472 main.go:141] libmachine: Writing SSH key tar header
	I1212 23:15:17.638018    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:20.769227    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:20.769313    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd' -SizeBytes 20000MB
	I1212 23:15:23.326059    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:23.326281    8472 main.go:141] libmachine: [stderr =====>] : 
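The fixed.vhd/disk.vhd dance above mirrors the docker-machine Hyper-V approach: create a tiny fixed VHD, overwrite the start of its data area with a tar stream (the "Writing magic tar header" and "Writing SSH key tar header" lines) that the boot image reads on first boot, then convert to a dynamic VHD and resize it to the requested 20000MB. A sketch of the tar-writing step, assuming the boot2docker convention of a magic first entry plus .ssh/authorized_keys; the in-guest consumer is an assumption here:

    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        // A fixed VHD keeps its footer at the end, so offset 0 is plain disk
        // content and can be overwritten with a tar stream.
        disk, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0o644)
        if err != nil {
            panic(err)
        }
        defer disk.Close()

        pub, err := os.ReadFile("id_rsa.pub") // the key generated a few lines up
        if err != nil {
            panic(err)
        }
        tw := tar.NewWriter(disk)
        magic := "boot2docker, please format-me" // "Writing magic tar header"
        if err := tw.WriteHeader(&tar.Header{Name: magic, Size: int64(len(magic))}); err != nil {
            panic(err)
        }
        if _, err := tw.Write([]byte(magic)); err != nil {
            panic(err)
        }
        // "Writing SSH key tar header"
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(pub))}); err != nil {
            panic(err)
        }
        if _, err := tw.Write(pub); err != nil {
            panic(err)
        }
        if err := tw.Close(); err != nil {
            panic(err)
        }
    }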
	I1212 23:15:23.326443    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-392000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:26.827330    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-392000-m02 -DynamicMemoryEnabled $false
	I1212 23:15:29.047581    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:29.047983    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:29.048174    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-392000-m02 -Count 2
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:31.216851    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:31.217251    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\boot2docker.iso'
	I1212 23:15:33.748082    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:33.748399    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-392000-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\disk.vhd'
	I1212 23:15:36.359294    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:36.359564    8472 main.go:141] libmachine: Starting VM...
	I1212 23:15:36.359738    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-392000-m02
	I1212 23:15:39.227776    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:39.227906    8472 main.go:141] libmachine: Waiting for host to start...
	I1212 23:15:39.228071    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:41.509631    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:41.510037    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:44.031565    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:44.031787    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:45.038541    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:47.239266    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:49.774015    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:49.774142    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:50.775721    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:52.997182    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:15:55.502870    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:15:55.503039    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:56.518873    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:15:58.738659    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:15:58.738736    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:15:58.738844    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stdout =====>] : 
	I1212 23:16:01.265330    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:02.269014    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:04.506810    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:04.506866    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:04.506903    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:07.087487    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:07.087855    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:07.088033    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:09.243954    8472 main.go:141] libmachine: [stderr =====>] : 
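"Waiting for host to start..." is a poll of the VM state plus the first adapter's IP; DHCP on the Default Switch is why ipaddresses[0] comes back empty several times before 172.30.56.38 appears. The loop, reconstructed from the commands in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression and returns its trimmed stdout;
    // errors are folded into an empty result for brevity.
    func ps(cmd string) string {
        out, _ := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        for {
            state := ps(`( Hyper-V\Get-VM multinode-392000-m02 ).state`)
            ip := ps(`(( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]`)
            if state == "Running" && ip != "" {
                fmt.Println("host up at", ip)
                return
            }
            time.Sleep(time.Second) // matches the ~1s gaps between retries above
        }
    }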
	I1212 23:16:09.244063    8472 machine.go:88] provisioning docker machine ...
	I1212 23:16:09.244248    8472 buildroot.go:166] provisioning hostname "multinode-392000-m02"
	I1212 23:16:09.244333    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:11.421301    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:11.421631    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:13.977447    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:13.977572    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:13.983166    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:13.992249    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:13.992249    8472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname
	I1212 23:16:14.163299    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-392000-m02
	
	I1212 23:16:14.163350    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:16.307595    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:16.308006    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:18.830534    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:18.839723    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:18.840482    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:18.840482    8472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-392000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-392000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-392000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:18.989326    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
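The "native" SSH client seen above runs each provisioning command over a key-authenticated session (the &{{{<nil> 0 ...}} dump is its config struct). A minimal sketch with golang.org/x/crypto/ssh, using the key path and address from this run; host-key checking is skipped, as is reasonable for a throwaway test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // no known_hosts for a fresh VM
        }
        client, err := ssh.Dial("tcp", "172.30.56.38:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-392000-m02 && echo "multinode-392000-m02" | sudo tee /etc/hostname`)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }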
	I1212 23:16:18.990311    8472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1212 23:16:18.990311    8472 buildroot.go:174] setting up certificates
	I1212 23:16:18.990311    8472 provision.go:83] configureAuth start
	I1212 23:16:18.990453    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:21.069453    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:21.069665    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:23.556570    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:23.556862    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:25.694020    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:28.222549    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:28.222832    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:28.222832    8472 provision.go:138] copyHostCerts
	I1212 23:16:28.223026    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I1212 23:16:28.223356    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1212 23:16:28.223356    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1212 23:16:28.223923    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1212 23:16:28.224665    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I1212 23:16:28.225195    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1212 23:16:28.225367    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1212 23:16:28.225569    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1212 23:16:28.226891    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I1212 23:16:28.227287    8472 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1212 23:16:28.227287    8472 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1212 23:16:28.227775    8472 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I1212 23:16:28.228810    8472 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-392000-m02 san=[172.30.56.38 172.30.56.38 localhost 127.0.0.1 minikube multinode-392000-m02]
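configureAuth generates a server certificate whose SANs cover the VM IP, localhost and both hostnames, signed by the CA under certs\. A compact crypto/x509 sketch of such a SAN certificate; to stay self-contained it self-signs rather than loading ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-392000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            DNSNames:     []string{"localhost", "minikube", "multinode-392000-m02"},
            IPAddresses:  []net.IP{net.ParseIP("172.30.56.38"), net.ParseIP("127.0.0.1")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        out, err := os.Create("server.pem")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }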
	I1212 23:16:28.608171    8472 provision.go:172] copyRemoteCerts
	I1212 23:16:28.622324    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:28.622324    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:30.750172    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:30.750561    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:33.272878    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:33.273157    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:33.273672    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:33.380622    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7582767s)
	I1212 23:16:33.380733    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1212 23:16:33.380808    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 23:16:33.420401    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1212 23:16:33.420965    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:16:33.458601    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1212 23:16:33.458774    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:16:33.496244    8472 provision.go:86] duration metric: configureAuth took 14.5058679s
	I1212 23:16:33.496324    8472 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:33.496868    8472 config.go:182] Loaded profile config "multinode-392000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 23:16:33.497008    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:35.573518    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:38.145631    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:38.152182    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:38.152702    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:38.152702    8472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 23:16:38.292294    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 23:16:38.292294    8472 buildroot.go:70] root file system type: tmpfs
	I1212 23:16:38.292555    8472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 23:16:38.292555    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:40.464946    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:40.465319    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:42.999493    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:43.007365    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:43.008294    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:43.008294    8472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.51.245"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 23:16:43.171083    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.51.245
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 23:16:43.171185    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:45.284506    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:45.284624    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:47.795520    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:47.800669    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:47.801716    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:16:47.801716    8472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 23:16:48.748338    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 23:16:48.748338    8472 machine.go:91] provisioned docker machine in 39.5040974s
	I1212 23:16:48.748338    8472 client.go:171] LocalClient.Create took 1m49.1099214s
	I1212 23:16:48.748338    8472 start.go:167] duration metric: libmachine.API.Create for "multinode-392000" took 1m49.1099214s
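The docker.service text sent over SSH is rendered on the client with run-specific values (the control-plane NO_PROXY, TLS paths, registry CIDR) substituted in, then installed only if it differs from what is already on disk. A sketch of that rendering with text/template; the field names and the trimmed flag set are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    Environment=NO_PROXY={{.ControlPlaneIP}}
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraFlags}}

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unit))
        // Values taken from the run above; the real flag list is longer.
        err := t.Execute(os.Stdout, map[string]string{
            "ControlPlaneIP": "172.30.51.245",
            "ExtraFlags":     "--label provider=hyperv --insecure-registry 10.96.0.0/12",
        })
        if err != nil {
            panic(err)
        }
    }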
	I1212 23:16:48.748338    8472 start.go:300] post-start starting for "multinode-392000-m02" (driver="hyperv")
	I1212 23:16:48.748887    8472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:48.762204    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:48.762204    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:50.863649    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:50.863756    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:53.416190    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:53.416608    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:16:53.526358    8472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7640815s)
	I1212 23:16:53.541029    8472 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:53.550919    8472 command_runner.go:130] > NAME=Buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 23:16:53.550919    8472 command_runner.go:130] > ID=buildroot
	I1212 23:16:53.550919    8472 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:16:53.550919    8472 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:16:53.551099    8472 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1212 23:16:53.551174    8472 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1212 23:16:53.552635    8472 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1212 23:16:53.552635    8472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> /etc/ssl/certs/138162.pem
	I1212 23:16:53.567223    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:53.582208    8472 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1212 23:16:53.623271    8472 start.go:303] post-start completed in 4.8749111s
	I1212 23:16:53.626212    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:16:55.698443    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:55.698604    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:16:58.238918    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:16:58.239486    8472 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-392000\config.json ...
	I1212 23:16:58.242308    8472 start.go:128] duration metric: createHost completed in 1m58.6051335s
	I1212 23:16:58.242308    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:00.321420    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:00.321547    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:02.858363    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:02.864207    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:02.864907    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:02.864907    8472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:03.006436    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423023.005320607
	
	I1212 23:17:03.006436    8472 fix.go:206] guest clock: 1702423023.005320607
	I1212 23:17:03.006436    8472 fix.go:219] Guest: 2023-12-12 23:17:03.005320607 +0000 UTC Remote: 2023-12-12 23:16:58.2423084 +0000 UTC m=+328.348317501 (delta=4.763012207s)
	I1212 23:17:03.006606    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:05.102311    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:05.102376    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:07.625460    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:07.631708    8472 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:07.632284    8472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.56.38 22 <nil> <nil>}
	I1212 23:17:07.632480    8472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702423023
	I1212 23:17:07.785418    8472 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 12 23:17:03 UTC 2023
	
	I1212 23:17:07.785481    8472 fix.go:226] clock set: Tue Dec 12 23:17:03 UTC 2023
	 (err=<nil>)
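The clock fix compares the guest's `date +%s.%N` output against the host-side timestamp and, if the delta is too large, sets the guest clock with `sudo date -s @<secs>`. Reproducing the arithmetic with the values from this run (the 2s threshold is an assumption; only the delta itself is taken from the log):

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // needsFix reports whether the guest clock should be set, and by how much it drifted.
    func needsFix(guest, host time.Time) (bool, time.Duration) {
        delta := guest.Sub(host)
        return math.Abs(delta.Seconds()) > 2, delta
    }

    func main() {
        guest := time.Unix(1702423023, 5320607) // parsed from "1702423023.005320607"
        host := time.Date(2023, 12, 12, 23, 16, 58, 242308400, time.UTC)
        fix, delta := needsFix(guest, host)
        fmt.Printf("delta=%s set=%v\n", delta, fix) // delta≈4.763s, so the clock gets set
    }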
	I1212 23:17:07.785481    8472 start.go:83] releasing machines lock for "multinode-392000-m02", held for 2m8.1482636s
	I1212 23:17:07.785678    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:09.909750    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:09.909833    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:12.451220    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:12.452194    8472 out.go:177] * Found network options:
	I1212 23:17:12.452963    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.453612    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.454421    8472 out.go:177]   - NO_PROXY=172.30.51.245
	W1212 23:17:12.455285    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:17:12.456641    8472 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:17:12.458904    8472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:12.459089    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:12.471636    8472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:17:12.471636    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-392000-m02 ).state
	I1212 23:17:14.665006    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665193    8472 main.go:141] libmachine: [stdout =====>] : Running
	
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:14.665280    8472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-392000-m02 ).networkadapters[0]).ipaddresses[0]
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.329644    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.330171    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.349676    8472 main.go:141] libmachine: [stdout =====>] : 172.30.56.38
	
	I1212 23:17:17.349791    8472 main.go:141] libmachine: [stderr =====>] : 
	I1212 23:17:17.350393    8472 sshutil.go:53] new ssh client: &{IP:172.30.56.38 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-392000-m02\id_rsa Username:docker}
	I1212 23:17:17.520588    8472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:17:17.520698    8472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0616953s)
	I1212 23:17:17.520789    8472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1212 23:17:17.520789    8472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0491302s)
	W1212 23:17:17.520789    8472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:17.540506    8472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:17.565496    8472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:17:17.565496    8472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:17.565629    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:17.565729    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:17.592642    8472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 23:17:17.606915    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 23:17:17.641476    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 23:17:17.660823    8472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 23:17:17.677875    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 23:17:17.711806    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.740097    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 23:17:17.771613    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 23:17:17.803488    8472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:17.833971    8472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 23:17:17.864431    8472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:17.880090    8472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:17:17.891942    8472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:17.921922    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:18.092747    8472 ssh_runner.go:195] Run: sudo systemctl restart containerd
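The sed one-liners above rewrite /etc/containerd/config.toml in place: pin sandbox_image to pause:3.9 and force SystemdCgroup = false so containerd matches the cgroupfs driver minikube selected. The same two edits in Go (the sample TOML is illustrative):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte(`[plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    `)
        // Pin the pause image, keeping the line's original indentation (${1}).
        conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
            ReplaceAll(conf, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.9"`))
        // Switch the runc cgroup driver to cgroupfs.
        conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
            ReplaceAll(conf, []byte(`${1}SystemdCgroup = false`))
        fmt.Print(string(conf))
    }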
	I1212 23:17:18.119496    8472 start.go:475] detecting cgroup driver to use...
	I1212 23:17:18.134351    8472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Unit]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 23:17:18.152056    8472 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 23:17:18.152056    8472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 23:17:18.152056    8472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitBurst=3
	I1212 23:17:18.152056    8472 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 23:17:18.152056    8472 command_runner.go:130] > [Service]
	I1212 23:17:18.152056    8472 command_runner.go:130] > Type=notify
	I1212 23:17:18.152056    8472 command_runner.go:130] > Restart=on-failure
	I1212 23:17:18.152056    8472 command_runner.go:130] > Environment=NO_PROXY=172.30.51.245
	I1212 23:17:18.152056    8472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 23:17:18.152056    8472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 23:17:18.152056    8472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 23:17:18.152056    8472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 23:17:18.152056    8472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 23:17:18.152056    8472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 23:17:18.152056    8472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1212 23:17:18.152056    8472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 23:17:18.152056    8472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNOFILE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitNPROC=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > LimitCORE=infinity
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 23:17:18.152056    8472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 23:17:18.153073    8472 command_runner.go:130] > TasksMax=infinity
	I1212 23:17:18.153073    8472 command_runner.go:130] > TimeoutStartSec=0
	I1212 23:17:18.153073    8472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 23:17:18.153073    8472 command_runner.go:130] > Delegate=yes
	I1212 23:17:18.153073    8472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 23:17:18.153073    8472 command_runner.go:130] > KillMode=process
	I1212 23:17:18.153073    8472 command_runner.go:130] > [Install]
	I1212 23:17:18.153073    8472 command_runner.go:130] > WantedBy=multi-user.target
	I1212 23:17:18.165057    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.196057    8472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:18.246410    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:18.280066    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.313237    8472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 23:17:18.368580    8472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 23:17:18.388251    8472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:18.419806    8472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 23:17:18.434054    8472 ssh_runner.go:195] Run: which cri-dockerd
	I1212 23:17:18.440054    8472 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 23:17:18.453333    8472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 23:17:18.468540    8472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
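The 189-byte drop-in scp'd here is not echoed either. A sketch of the usual shape of such an override (contents assumed; the flags are standard cri-dockerd options, but the exact payload written in this run is not visible in the log) uses the same clear-then-set ExecStart pattern seen in the docker.service unit printed above:

	[Service]
	ExecStart=
	ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --hairpin-mode=hairpin-veth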
	I1212 23:17:18.509927    8472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 23:17:18.683814    8472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 23:17:18.837593    8472 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 23:17:18.838769    8472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
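The 130-byte /etc/docker/daemon.json pushed here is likewise not shown; only its size and purpose ("cgroupfs" as cgroup driver) are logged. A minimal sketch of a daemon.json that selects that driver (assumed contents, for illustration):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}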
	I1212 23:17:18.883547    8472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:19.063745    8472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 23:18:20.172717    8472 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I1212 23:18:20.172717    8472 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I1212 23:18:20.172717    8472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1086969s)
	I1212 23:18:20.190447    8472 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1212 23:18:20.218077    8472 out.go:177] 
	W1212 23:18:20.218999    8472 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 23:15:58 UTC, ends at Tue 2023-12-12 23:18:20 UTC. --
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.331741436Z" level=info msg="Starting up"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.332827739Z" level=info msg="containerd not running, starting managed containerd"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.333919343Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=681
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.365275750Z" level=info msg="starting containerd" revision=4e1fe7492b9df85914c389d1f15a3ceedbb280ac version=v1.7.10
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391200738Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.391293938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393498646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393668447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.393950948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394197448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394360449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394521149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394747050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.394938151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395413253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395501553Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395518553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395751454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.395838654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396110355Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.396196255Z" level=info msg="metadata content store policy set" policy=shared
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406639691Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406690491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406707991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406761091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406781291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406846291Z" level=info msg="NRI interface is disabled by configuration."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.406901492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407052592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407088892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407106492Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407188093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407257293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407277793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407291993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407541694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407563494Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407630394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407661094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.407735694Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408000095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408687398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408844098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408883198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.408938499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409034299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409074399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409110099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409232700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409262900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409291700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409340500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409356500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409437300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409484100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409502401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409519201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409532201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409573901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409587801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409600401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409632401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409645601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409657301Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.409927202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410045202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410186303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 12 23:16:48 multinode-392000-m02 dockerd[681]: time="2023-12-12T23:16:48.410229503Z" level=info msg="containerd successfully booted in 0.045918s"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.443854718Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.463475184Z" level=info msg="Loading containers: start."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.672639397Z" level=info msg="Loading containers: done."
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691112460Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691132360Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691139260Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691144760Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691225060Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.691323760Z" level=info msg="Daemon has completed initialization"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744545642Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 12 23:16:48 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:16:48.744815943Z" level=info msg="API listen on [::]:2376"
	Dec 12 23:16:48 multinode-392000-m02 systemd[1]: Started Docker Application Container Engine.
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.085735578Z" level=info msg="Processing signal 'terminated'"
	Dec 12 23:17:19 multinode-392000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087707378Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.087710178Z" level=info msg="Daemon shutdown complete"
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088155778Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 12 23:17:19 multinode-392000-m02 dockerd[674]: time="2023-12-12T23:17:19.088181378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: docker.service: Succeeded.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Dec 12 23:17:20 multinode-392000-m02 systemd[1]: Starting Docker Application Container Engine...
	Dec 12 23:17:20 multinode-392000-m02 dockerd[1010]: time="2023-12-12T23:17:20.162493278Z" level=info msg="Starting up"
	Dec 12 23:18:20 multinode-392000-m02 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 12 23:18:20 multinode-392000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1212 23:18:20.219707    8472 out.go:239] * 
	W1212 23:18:20.220544    8472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:18:20.221540    8472 out.go:177] 
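The failure quoted above is dockerd[1010] timing out while dialing /run/containerd/containerd.sock after the config rewrite and restart. A plausible manual triage session on the node (not performed in this run; standard systemd tooling only) would check whether containerd itself came back and recreated its socket:

	sudo systemctl status containerd --no-pager
	sudo journalctl --no-pager -u containerd
	ls -l /run/containerd/containerd.sock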
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:41:49 UTC. --
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.282437620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.284918206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.285109705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286113599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:56.286332798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7694fc2e072409c82e9a89c81cdb1dbf3955a826194d4c6ce69896a818ffd8c/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:56 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:14:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eec0e2bb8f7fb3f97224e573a86f1d0c8af411baddfa1adaa20402928c80977d/resolv.conf as [nameserver 172.30.48.1]"
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.073894364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074049263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074069063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.074078763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132115055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132325154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132351354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:14:57 multinode-392000 dockerd[1324]: time="2023-12-12T23:14:57.132362153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.818830729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820198629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820221327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:56 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:56.820295222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:57 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef8f16e239bc98e7eb9dc0c53fd98c42346ab8c95f8981cda5dde4865c3765b9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 12 23:18:58 multinode-392000 cri-dockerd[1210]: time="2023-12-12T23:18:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524301867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524431958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524458956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 23:18:58 multinode-392000 dockerd[1324]: time="2023-12-12T23:18:58.524471055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c0d1460fe14b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Running             busybox                   0                   ef8f16e239bc9       busybox-5bc68d56bd-x7ldl
	d33bb583a4c67       ead0a4a53df89                                                                                         26 minutes ago      Running             coredns                   0                   eec0e2bb8f7fb       coredns-5dd5756b68-4xn8h
	f6b34e581fc6d       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   d7694fc2e0724       storage-provisioner
	58046948f7a39       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              27 minutes ago      Running             kindnet-cni               0                   13c6e0fbb4c87       kindnet-bpcxd
	a260d7090f938       83f6cc407eed8                                                                                         27 minutes ago      Running             kube-proxy                0                   60c6b551ada48       kube-proxy-55nr8
	2313251d444bd       e3db313c6dbc0                                                                                         27 minutes ago      Running             kube-scheduler            0                   2f8be6d8ad0b8       kube-scheduler-multinode-392000
	22eab41fa9507       73deb9a3f7025                                                                                         27 minutes ago      Running             etcd                      0                   bb073669c83d7       etcd-multinode-392000
	235957741d342       d058aa5ab969c                                                                                         27 minutes ago      Running             kube-controller-manager   0                   0a157140134cc       kube-controller-manager-multinode-392000
	6c354edfe4229       7fe0e6f37db33                                                                                         27 minutes ago      Running             kube-apiserver            0                   74927bb72940a       kube-apiserver-multinode-392000
	
	* 
	* ==> coredns [d33bb583a4c6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = cc2ba5aac5f285415717ace34133aafabe85ba31078710c0f3cd9131a1adf7cfd7e4bf01760fa119f705fbfb69f9e2d72a302f1bbc783818a8e680f5d229514e
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52436 - 14801 "HINFO IN 6583598644721938310.5334892932610769491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082658561s
	[INFO] 10.244.0.3:45313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412009s
	[INFO] 10.244.0.3:57910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.064058426s
	[INFO] 10.244.0.3:37802 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.037057868s
	[INFO] 10.244.0.3:53205 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.098326683s
	[INFO] 10.244.0.3:48065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120602s
	[INFO] 10.244.0.3:58616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.050508538s
	[INFO] 10.244.0.3:60247 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114602s
	[INFO] 10.244.0.3:38852 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191504s
	[INFO] 10.244.0.3:34962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01262466s
	[INFO] 10.244.0.3:40837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094102s
	[INFO] 10.244.0.3:50511 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000205404s
	[INFO] 10.244.0.3:46775 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218404s
	[INFO] 10.244.0.3:51546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092302s
	[INFO] 10.244.0.3:51278 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170504s
	[INFO] 10.244.0.3:40156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096702s
	[INFO] 10.244.0.3:57387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190803s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170703s
	[INFO] 10.244.0.3:48895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108502s
	[INFO] 10.244.0.3:34622 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141402s
	[INFO] 10.244.0.3:36375 - 5 "PTR IN 1.48.30.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000268705s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-392000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_14_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:41:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:40:02 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:40:02 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:40:02 +0000   Tue, 12 Dec 2023 23:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:40:02 +0000   Tue, 12 Dec 2023 23:14:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.51.245
	  Hostname:    multinode-392000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 430cf12d1f18486bbb2dad5ba35f34f7
	  System UUID:                7ad4f3ea-4ba4-0c41-b258-b71782793bdf
	  Boot ID:                    de054c31-4928-4877-9a0d-94e8f25eb559
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x7ldl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-4xn8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-392000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-bpcxd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-392000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-multinode-392000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-55nr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-392000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node multinode-392000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node multinode-392000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node multinode-392000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                node-controller  Node multinode-392000 event: Registered Node multinode-392000 in Controller
	  Normal  NodeReady                26m                kubelet          Node multinode-392000 status is now: NodeReady
	
	
	Name:               multinode-392000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-392000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-392000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T23_41_21_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-392000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:41:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:41:27 +0000   Tue, 12 Dec 2023 23:41:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:41:27 +0000   Tue, 12 Dec 2023 23:41:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:41:27 +0000   Tue, 12 Dec 2023 23:41:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:41:27 +0000   Tue, 12 Dec 2023 23:41:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.60.150
	  Hostname:    multinode-392000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 506eb669338b480cbe25ee812f7a4956
	  System UUID:                93e58034-5f25-104c-8ce8-7830c4ca3032
	  Boot ID:                    2fa7501f-6386-48b8-b30b-3a8b85531f11
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4rg9t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-gl8th               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m57s
	  kube-system                 kube-proxy-rmg5p            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 27s                    kube-proxy  
	  Normal  Starting                 6m47s                  kube-proxy  
	  Normal  Starting                 6m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m57s (x2 over 6m57s)  kubelet     Node multinode-392000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m57s (x2 over 6m57s)  kubelet     Node multinode-392000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m57s (x2 over 6m57s)  kubelet     Node multinode-392000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m37s                  kubelet     Node multinode-392000-m03 status is now: NodeReady
	  Normal  Starting                 30s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x2 over 30s)      kubelet     Node multinode-392000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x2 over 30s)      kubelet     Node multinode-392000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x2 over 30s)      kubelet     Node multinode-392000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                22s                    kubelet     Node multinode-392000-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +1.254662] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.084744] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.170112] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.825297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:13] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136611] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[ +29.496244] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.608816] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.164324] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.190534] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +1.324953] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.324912] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	[  +0.169479] systemd-fstab-generator[1166]: Ignoring "noauto" for root device
	[  +0.169520] systemd-fstab-generator[1177]: Ignoring "noauto" for root device
	[  +0.165018] systemd-fstab-generator[1188]: Ignoring "noauto" for root device
	[  +0.210508] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[Dec12 23:14] systemd-fstab-generator[1309]: Ignoring "noauto" for root device
	[  +2.134792] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.270408] systemd-fstab-generator[1690]: Ignoring "noauto" for root device
	[  +0.838733] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.996306] systemd-fstab-generator[2661]: Ignoring "noauto" for root device
	[ +24.543609] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [22eab41fa950] <==
	* {"level":"info","ts":"2023-12-12T23:14:20.357823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"93ff368cdeea47a1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.357835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 93ff368cdeea47a1 elected leader 93ff368cdeea47a1 at term 2"}
	{"level":"info","ts":"2023-12-12T23:14:20.361772Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.36777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"93ff368cdeea47a1","local-member-attributes":"{Name:multinode-392000 ClientURLs:[https://172.30.51.245:2379]}","request-path":"/0/members/93ff368cdeea47a1/attributes","cluster-id":"577d8ccb6648d9a8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:14:20.367821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.367989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:14:20.370538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.372122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.51.245:2379"}
	{"level":"info","ts":"2023-12-12T23:14:20.409981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"577d8ccb6648d9a8","local-member-id":"93ff368cdeea47a1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410139Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:14:20.410406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:14:20.410799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:24:20.417791Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2023-12-12T23:24:20.419362Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":681,"took":"1.040537ms","hash":778906542}
	{"level":"info","ts":"2023-12-12T23:24:20.419458Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":778906542,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:29:20.427361Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2023-12-12T23:29:20.428786Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":922,"took":"784.101µs","hash":2156113925}
	{"level":"info","ts":"2023-12-12T23:29:20.428884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2156113925,"revision":922,"compact-revision":681}
	{"level":"info","ts":"2023-12-12T23:34:20.436518Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1163}
	{"level":"info","ts":"2023-12-12T23:34:20.438268Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1163,"took":"858.507µs","hash":3676843287}
	{"level":"info","ts":"2023-12-12T23:34:20.438371Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3676843287,"revision":1163,"compact-revision":922}
	{"level":"info","ts":"2023-12-12T23:39:20.444977Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1405}
	{"level":"info","ts":"2023-12-12T23:39:20.44615Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1405,"took":"727.102µs","hash":2832118653}
	{"level":"info","ts":"2023-12-12T23:39:20.446222Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2832118653,"revision":1405,"compact-revision":1163}
	
	* 
	* ==> kernel <==
	*  23:41:49 up 29 min,  0 users,  load average: 0.20, 0.29, 0.35
	Linux multinode-392000 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [58046948f7a3] <==
	* I1212 23:40:52.677452       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:40:52.677574       1 main.go:227] handling current node
	I1212 23:40:52.677688       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:40:52.677705       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:41:02.685653       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:41:02.685831       1 main.go:227] handling current node
	I1212 23:41:02.685885       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:41:02.685994       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:41:12.701671       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:41:12.701769       1 main.go:227] handling current node
	I1212 23:41:12.701785       1 main.go:223] Handling node with IPs: map[172.30.48.192:{}]
	I1212 23:41:12.701793       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.1.0/24] 
	I1212 23:41:22.709064       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:41:22.709170       1 main.go:227] handling current node
	I1212 23:41:22.709185       1 main.go:223] Handling node with IPs: map[172.30.60.150:{}]
	I1212 23:41:22.709194       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.2.0/24] 
	I1212 23:41:22.709783       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.30.60.150 Flags: [] Table: 0} 
	I1212 23:41:32.727350       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:41:32.727453       1 main.go:227] handling current node
	I1212 23:41:32.727469       1 main.go:223] Handling node with IPs: map[172.30.60.150:{}]
	I1212 23:41:32.727478       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.2.0/24] 
	I1212 23:41:42.734409       1 main.go:223] Handling node with IPs: map[172.30.51.245:{}]
	I1212 23:41:42.734534       1 main.go:227] handling current node
	I1212 23:41:42.734549       1 main.go:223] Handling node with IPs: map[172.30.60.150:{}]
	I1212 23:41:42.734558       1 main.go:250] Node multinode-392000-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kube-apiserver [6c354edfe422] <==
	* I1212 23:14:22.966861       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:14:22.967846       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:14:22.980339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:14:23.000634       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:14:23.000942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:14:23.002240       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:14:23.002278       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:14:23.002287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:14:23.002295       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:14:23.011378       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:14:23.760921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:14:23.770137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:14:23.770155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:14:24.576880       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:14:24.669218       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:14:24.814943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:14:24.825391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.51.245]
	I1212 23:14:24.827160       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:14:24.832899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:14:24.873569       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:14:26.688119       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:14:26.703417       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:14:26.718299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:14:38.752415       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 23:14:39.103035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [235957741d34] <==
	* I1212 23:18:56.394927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.064871ms"
	I1212 23:18:56.421496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.459964ms"
	I1212 23:18:56.445750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.867827ms"
	I1212 23:18:56.446077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="103.493µs"
	I1212 23:18:59.452572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.321812ms"
	I1212 23:18:59.452821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.694µs"
	I1212 23:34:52.106307       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-392000-m03\" does not exist"
	I1212 23:34:52.120727       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-392000-m03" podCIDRs=["10.244.1.0/24"]
	I1212 23:34:52.134312       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rmg5p"
	I1212 23:34:52.139634       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gl8th"
	I1212 23:34:53.581868       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-392000-m03"
	I1212 23:34:53.582294       1 event.go:307] "Event occurred" object="multinode-392000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-392000-m03 event: Registered Node multinode-392000-m03 in Controller"
	I1212 23:35:12.788142       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-392000-m03"
	I1212 23:38:28.652412       1 event.go:307] "Event occurred" object="multinode-392000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-392000-m03 status is now: NodeNotReady"
	I1212 23:38:28.666618       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-rmg5p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1212 23:38:28.680826       1 event.go:307] "Event occurred" object="kube-system/kindnet-gl8th" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1212 23:38:57.271941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="111.7µs"
	I1212 23:41:20.027809       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-392000-m03\" does not exist"
	I1212 23:41:20.037751       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-392000-m03" podCIDRs=["10.244.2.0/24"]
	I1212 23:41:27.908734       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-392000-m03"
	I1212 23:41:27.927017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="92.8µs"
	I1212 23:41:27.942497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.5µs"
	I1212 23:41:28.740859       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-4rg9t" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-4rg9t"
	I1212 23:41:30.324515       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.629813ms"
	I1212 23:41:30.325200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="79.301µs"
	
	* 
	* ==> kube-proxy [a260d7090f93] <==
	* I1212 23:14:40.548388       1 server_others.go:69] "Using iptables proxy"
	I1212 23:14:40.568436       1 node.go:141] Successfully retrieved node IP: 172.30.51.245
	I1212 23:14:40.635432       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:14:40.635716       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:14:40.638923       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:14:40.639152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:14:40.639551       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:14:40.640017       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:14:40.641081       1 config.go:188] "Starting service config controller"
	I1212 23:14:40.641288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:14:40.641685       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:14:40.641937       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:14:40.644879       1 config.go:315] "Starting node config controller"
	I1212 23:14:40.645073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:14:40.742503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:14:40.742567       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:14:40.745261       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2313251d444b] <==
	* W1212 23:14:22.973548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:22.973806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.868650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:14:23.868677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:14:23.880821       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:14:23.880850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:14:23.906825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:14:23.907043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:14:23.908460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:14:23.909050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:14:23.954797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:23.954886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:23.961825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:14:23.961846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:14:24.085183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:14:24.085212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:14:24.103672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:14:24.103696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:14:24.119305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:14:24.119483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 23:14:24.143381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:14:24.143650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:14:24.300755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:14:24.300991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:14:25.823950       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:12:32 UTC, ends at Tue 2023-12-12 23:41:50 UTC. --
	Dec 12 23:35:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:36:27 multinode-392000 kubelet[2682]: E1212 23:36:27.005054    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:36:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:37:27 multinode-392000 kubelet[2682]: E1212 23:37:27.014710    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:37:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:38:27 multinode-392000 kubelet[2682]: E1212 23:38:27.002495    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:38:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:38:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:38:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:39:27 multinode-392000 kubelet[2682]: E1212 23:39:27.002722    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:39:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:39:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:39:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:40:27 multinode-392000 kubelet[2682]: E1212 23:40:27.005164    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:40:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:40:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:40:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:41:27 multinode-392000 kubelet[2682]: E1212 23:41:27.003192    2682 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:41:27 multinode-392000 kubelet[2682]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:41:27 multinode-392000 kubelet[2682]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:41:27 multinode-392000 kubelet[2682]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 23:41:42.134118   10628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-392000 -n multinode-392000: (11.874662s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-392000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (162.67s)
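
Note: the kubelet journal captured above shows a recurring "Could not set up iptables canary" error because the guest kernel lacks the ip6tables nat table. A quick way to confirm this from the host, while the cluster is still up, is the following diagnostic sketch (not part of the test output; profile name taken from the run above):

	# open a shell in the control-plane VM of the failing profile
	minikube ssh -p multinode-392000
	# inside the VM: try to load the IPv6 NAT module and list the table;
	# "Table does not exist" here matches the kubelet error above
	sudo modprobe ip6table_nat
	sudo ip6tables -t nat -L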

                                                
                                    
TestRunningBinaryUpgrade (442.12s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.3559912533.exe start -p running-upgrade-100500 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:133: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.3559912533.exe start -p running-upgrade-100500 --memory=2200 --vm-driver=hyperv: (4m25.8961373s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-100500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-100500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (1m48.7423584s)
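For reference, the sequence this test automates can be reproduced by hand with the same two invocations, shown here as a PowerShell sketch (the legacy binary is a per-run temp file; the path below is a placeholder):

	# 1) create the cluster with the legacy release (the test uses a cached
	#    v1.6.2 binary in %TEMP%; substitute your own copy for the placeholder)
	& "C:\path\to\minikube-v1.6.2.exe" start -p running-upgrade-100500 --memory=2200 --vm-driver=hyperv
	# 2) re-start the same profile with the binary under test; this is the
	#    invocation that exits with status 90 above
	.\out\minikube-windows-amd64.exe start -p running-upgrade-100500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv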

                                                
                                                
-- stdout --
	* [running-upgrade-100500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-100500 in cluster running-upgrade-100500
	* Updating the running hyperv "running-upgrade-100500" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1213 00:11:53.638102    3552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1213 00:11:53.721351    3552 out.go:296] Setting OutFile to fd 1716 ...
	I1213 00:11:53.722350    3552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:11:53.722350    3552 out.go:309] Setting ErrFile to fd 1672...
	I1213 00:11:53.722350    3552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:11:53.747130    3552 out.go:303] Setting JSON to false
	I1213 00:11:53.753244    3552 start.go:128] hostinfo: {"hostname":"minikube7","uptime":79911,"bootTime":1702346402,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1213 00:11:53.753244    3552 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1213 00:11:53.754231    3552 out.go:177] * [running-upgrade-100500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1213 00:11:53.755238    3552 notify.go:220] Checking for updates...
	I1213 00:11:53.755238    3552 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1213 00:11:53.756242    3552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:11:53.757236    3552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1213 00:11:53.758241    3552 out.go:177]   - MINIKUBE_LOCATION=17761
	I1213 00:11:53.759243    3552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:11:53.760240    3552 config.go:182] Loaded profile config "running-upgrade-100500": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1213 00:11:53.760240    3552 start_flags.go:694] config upgrade: Driver=hyperv
	I1213 00:11:53.760240    3552 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1213 00:11:53.760240    3552 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\running-upgrade-100500\config.json ...
	I1213 00:11:53.764253    3552 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1213 00:11:53.765241    3552 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:11:59.792148    3552 out.go:177] * Using the hyperv driver based on existing profile
	I1213 00:11:59.850231    3552 start.go:298] selected driver: hyperv
	I1213 00:11:59.850542    3552 start.go:902] validating driver "hyperv" against &{Name:running-upgrade-100500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.30.55.238 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1213 00:11:59.850932    3552 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:11:59.907089    3552 cni.go:84] Creating CNI manager for ""
	I1213 00:11:59.907089    3552 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1213 00:11:59.907089    3552 start_flags.go:323] config:
	{Name:running-upgrade-100500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.30.55.238 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1213 00:11:59.907089    3552 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:11:59.942536    3552 out.go:177] * Starting control plane node running-upgrade-100500 in cluster running-upgrade-100500
	I1213 00:11:59.944270    3552 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1213 00:11:59.991893    3552 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1213 00:11:59.992724    3552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1213 00:11:59.992724    3552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I1213 00:11:59.992942    3552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I1213 00:11:59.992942    3552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I1213 00:11:59.992724    3552 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\running-upgrade-100500\config.json ...
	I1213 00:11:59.992724    3552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I1213 00:11:59.992724    3552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I1213 00:11:59.992942    3552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I1213 00:11:59.993142    3552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
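
The preload 404 above (no v18 preload tarball exists for this old v1.17.0/docker combination) pushes minikube onto the per-image cache path, and on Windows each image reference must first be made filesystem-safe: ':' is not a legal NTFS filename character, so the tag separator becomes '_'. A sketch of that rewrite, illustrative rather than localpath.go's exact code:

    // Make a cached-image path Windows-safe by replacing the image tag
    // separator ':' with '_', as in the "windows sanitize" lines above.
    package main

    import (
        "fmt"
        "strings"
    )

    func sanitizeTag(p string) string {
        // Only the last ':' is the tag separator; the drive-letter colon
        // in `C:\...` sits at index 1 and must be kept.
        i := strings.LastIndex(p, ":")
        if i <= 1 {
            return p
        }
        return p[:i] + "_" + p[i+1:]
    }

    func main() {
        fmt.Println(sanitizeTag(`C:\cache\images\amd64\registry.k8s.io\pause:3.1`))
        // prints ...\registry.k8s.io\pause_3.1
    }
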
	I1213 00:11:59.997119    3552 start.go:365] acquiring machines lock for running-upgrade-100500: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:12:00.214311    3552 cache.go:107] acquiring lock: {Name:mke680978131adbec647605a81bab7c783de93d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:12:00.214311    3552 cache.go:107] acquiring lock: {Name:mkeac0ccf1d6f0e0eb0c19801602a218964c6025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:12:00.214890    3552 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I1213 00:12:00.214921    3552 cache.go:107] acquiring lock: {Name:mk3a663ba67028a054dd5a6e96ba367c56e950d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:12:00.214921    3552 cache.go:107] acquiring lock: {Name:mkc6e9060bea9211e4f8126ac5de344442cb8c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:12:00.214921    3552 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 221.9779ms
	I1213 00:12:00.214921    3552 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I1213 00:12:00.214921    3552 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I1213 00:12:00.215295    3552 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I1213 00:12:00.215295    3552 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 221.7771ms
	I1213 00:12:00.215295    3552 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I1213 00:12:00.215295    3552 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I1213 00:12:00.215447    3552 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 221.9286ms
	I1213 00:12:00.215615    3552 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I1213 00:12:00.215615    3552 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 222.3892ms
	I1213 00:12:00.215615    3552 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I1213 00:12:00.217258    3552 cache.go:107] acquiring lock: {Name:mk6522f86f404131d1768d0de0ce775513ec42e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:12:00.217320    3552 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I1213 00:12:00.217320    3552 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 224.3772ms
	I1213 00:12:00.217859    3552 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I1213 00:12:00.227191    3552 cache.go:107] acquiring lock: {Name:mk43c24b3570a50e54ec9f1dc43aba5ea2e54859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:12:00.227273    3552 cache.go:107] acquiring lock: {Name:mk945c9573a262bf2c410f3ec338c9e4cbac7ce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:12:00.227273    3552 cache.go:107] acquiring lock: {Name:mk1869bccfa4db5e538bd31af28e9c95a48df16c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:12:00.227273    3552 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I1213 00:12:00.227273    3552 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I1213 00:12:00.227273    3552 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1213 00:12:00.227273    3552 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 233.6023ms
	I1213 00:12:00.227273    3552 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 234.5476ms
	I1213 00:12:00.227814    3552 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I1213 00:12:00.227882    3552 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I1213 00:12:00.227273    3552 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 234.5476ms
	I1213 00:12:00.227882    3552 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1213 00:12:00.227882    3552 cache.go:87] Successfully saved all images to host disk.
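
The interleaved cache.go lines above are eight goroutines running the same routine: take the image's lock, stat the cached tar, and return early when it already exists, which is why every image "saves" in roughly 220ms with no pull. A sketch of that shape, with saveToTar as a hypothetical stand-in for the real pull-and-export step:

    // Cache images concurrently with a fast path for tars that already
    // exist on disk (the "cache.go:115 ... exists" lines above).
    package main

    import (
        "fmt"
        "os"
        "sync"
    )

    // saveToTar is a placeholder for pulling an image and exporting it.
    func saveToTar(image, dst string) error { return nil }

    func cacheImages(images map[string]string) error {
        var wg sync.WaitGroup
        errs := make(chan error, len(images))
        for image, dst := range images {
            image, dst := image, dst
            wg.Add(1)
            go func() {
                defer wg.Done()
                if _, err := os.Stat(dst); err == nil {
                    fmt.Println(dst, "exists, skipping") // cache hit
                    return
                }
                errs <- saveToTar(image, dst)
            }()
        }
        wg.Wait()
        close(errs)
        for err := range errs {
            if err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = cacheImages(map[string]string{
            "registry.k8s.io/pause:3.1": `C:\cache\pause_3.1`,
        })
    }
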
	I1213 00:12:02.069021    3552 start.go:369] acquired machines lock for "running-upgrade-100500" in 2.0718929s
	I1213 00:12:02.069021    3552 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:12:02.069021    3552 fix.go:54] fixHost starting: minikube
	I1213 00:12:02.071599    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:04.474044    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:04.474127    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:04.474127    3552 fix.go:102] recreateIfNeeded on running-upgrade-100500: state=Running err=<nil>
	W1213 00:12:04.474199    3552 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:12:04.475068    3552 out.go:177] * Updating the running hyperv "running-upgrade-100500" VM ...
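
Every "[executing ==>]" line in this section is a fresh powershell.exe process: the Hyper-V driver keeps no API session, so both the VM state and the guest IP are scraped from Hyper-V\Get-VM at two to three seconds per round trip, which is where most of fixHost's minute-plus goes. A sketch of the pattern, with the command text copied from the log and the wrapper itself assumed:

    // Query Hyper-V the way the log above does: one PowerShell child
    // process per question.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func powershell(cmd string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, _ := powershell(`( Hyper-V\Get-VM running-upgrade-100500 ).state`)
        ip, _ := powershell(`(( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]`)
        fmt.Println(state, ip)
    }
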
	I1213 00:12:04.475862    3552 machine.go:88] provisioning docker machine ...
	I1213 00:12:04.475937    3552 buildroot.go:166] provisioning hostname "running-upgrade-100500"
	I1213 00:12:04.476068    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:06.867430    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:06.867598    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:06.867598    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:12:09.943348    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:12:09.943348    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:09.951517    3552 main.go:141] libmachine: Using SSH client type: native
	I1213 00:12:09.951517    3552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.55.238 22 <nil> <nil>}
	I1213 00:12:09.951517    3552 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-100500 && echo "running-upgrade-100500" | sudo tee /etc/hostname
	I1213 00:12:10.115617    3552 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-100500
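
The one-liner above sets the hostname twice on purpose: "sudo hostname" renames the running kernel immediately, while the tee into /etc/hostname makes the name survive a reboot.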
	
	I1213 00:12:10.115617    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:13.152217    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:13.152217    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:13.152217    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:12:15.773986    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:12:15.774304    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:15.780509    3552 main.go:141] libmachine: Using SSH client type: native
	I1213 00:12:15.781241    3552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.55.238 22 <nil> <nil>}
	I1213 00:12:15.781320    3552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-100500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-100500/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-100500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:12:15.903700    3552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
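
The follow-up script keeps /etc/hosts consistent with that name, and it is written to be idempotent: grep -xq first checks whether some line already ends in the hostname; if not, an existing 127.0.1.1 entry is rewritten in place with sed, and only as a last resort is a new line appended. Using 127.0.1.1 rather than 127.0.0.1 is the usual Debian-style self-alias, which keeps the hostname resolvable no matter what address DHCP hands the VM.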
	I1213 00:12:15.903789    3552 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I1213 00:12:15.903789    3552 buildroot.go:174] setting up certificates
	I1213 00:12:15.903789    3552 provision.go:83] configureAuth start
	I1213 00:12:15.903879    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:18.065861    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:18.065861    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:18.065861    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:12:20.902693    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:12:20.903016    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:20.903078    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:23.119824    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:23.119824    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:23.120112    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:12:25.779636    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:12:25.779636    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:25.779636    3552 provision.go:138] copyHostCerts
	I1213 00:12:25.779636    3552 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I1213 00:12:25.779636    3552 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I1213 00:12:25.780666    3552 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1213 00:12:25.782103    3552 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I1213 00:12:25.782103    3552 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I1213 00:12:25.782531    3552 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 00:12:25.783847    3552 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I1213 00:12:25.783910    3552 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I1213 00:12:25.784282    3552 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
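
The found/rm/cp triples above are copyHostCerts refreshing the store-root copies of ca.pem, cert.pem and key.pem from .minikube\certs on every start. The same remove-then-copy shape, as a hypothetical helper rather than exec_runner itself:

    // Replace a possibly stale certificate copy with a fresh one.
    package main

    import "os"

    func refreshCert(src, dst string) error {
        if _, err := os.Stat(dst); err == nil { // "found ..., removing ..."
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        return os.WriteFile(dst, data, 0o600)
    }

    func main() {
        _ = refreshCert(`C:\m\.minikube\certs\ca.pem`, `C:\m\.minikube\ca.pem`)
    }
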
	I1213 00:12:25.784944    3552 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-100500 san=[172.30.55.238 172.30.55.238 localhost 127.0.0.1 minikube running-upgrade-100500]
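
The san=[...] list on that generation line mixes addresses and names; inside the certificate they become separate IP and DNS subject-alternative-name entries. A self-contained sketch that emits a server certificate carrying exactly those SANs (the in-memory stand-in CA and the ECDSA key type are assumptions; the real flow signs with the ca.pem/ca-key.pem named above):

    // Issue a CA-signed server certificate with the SANs from the log.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; minikube instead loads ca.pem / ca-key.pem.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-100500"}},
            IPAddresses:  []net.IP{net.ParseIP("172.30.55.238"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "running-upgrade-100500"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
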
	I1213 00:12:26.095951    3552 provision.go:172] copyRemoteCerts
	I1213 00:12:26.110630    3552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:12:26.110630    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:28.525888    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:28.525888    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:28.526053    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:12:31.168520    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:12:31.168712    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:31.169567    3552 sshutil.go:53] new ssh client: &{IP:172.30.55.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-100500\id_rsa Username:docker}
	I1213 00:12:31.269999    3552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.159345s)
	I1213 00:12:31.270760    3552 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 00:12:31.289989    3552 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 00:12:31.312624    3552 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:12:31.336118    3552 provision.go:86] duration metric: configureAuth took 15.432258s
	I1213 00:12:31.336118    3552 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:12:31.336118    3552 config.go:182] Loaded profile config "running-upgrade-100500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1213 00:12:31.336648    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:33.602980    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:33.603050    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:33.603140    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:12:36.254917    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:12:36.254957    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:36.261444    3552 main.go:141] libmachine: Using SSH client type: native
	I1213 00:12:36.262274    3552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.55.238 22 <nil> <nil>}
	I1213 00:12:36.262334    3552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 00:12:36.389208    3552 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 00:12:36.389313    3552 buildroot.go:70] root file system type: tmpfs
	I1213 00:12:36.389446    3552 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 00:12:36.389446    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:38.605204    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:38.605204    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:38.605204    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:12:41.270953    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:12:41.270953    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:41.275988    3552 main.go:141] libmachine: Using SSH client type: native
	I1213 00:12:41.276948    3552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.55.238 22 <nil> <nil>}
	I1213 00:12:41.276948    3552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 00:12:41.416206    3552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 00:12:41.416206    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:12:43.634585    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:12:43.634690    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:43.634935    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:12:46.690156    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:12:46.690352    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:12:46.696823    3552 main.go:141] libmachine: Using SSH client type: native
	I1213 00:12:46.697651    3552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.55.238 22 <nil> <nil>}
	I1213 00:12:46.697651    3552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 00:13:01.298707    3552 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 00:13:01.298707    3552 machine.go:91] provisioned docker machine in 56.8225439s
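
Two details in the unit update above are easy to miss. First, the escaping: the printf that wrote the file sent \$MAINPID so the guest shell would not expand it, and the diff shows why that matters, because the old unit had a bare "ExecReload=/bin/kill -s HUP " with the variable lost. Second, the swap idiom: diff -u exits non-zero when the files differ, so the "||" branch (move the .new file into place, then daemon-reload, enable, restart) runs only when something actually changed. The same idiom as a sketch, with runSSH a hypothetical stand-in for the ssh_runner used throughout this log:

    // Swap in a new systemd unit only when it differs from the old one.
    package main

    import "fmt"

    func updateUnit(runSSH func(string) (string, error)) error {
        _, err := runSSH(
            "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || " +
                "{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; " +
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }")
        return err
    }

    func main() {
        // Fake transport for illustration: print the command instead of dialing SSH.
        fake := func(cmd string) (string, error) { fmt.Println(cmd); return "", nil }
        _ = updateUnit(fake)
    }
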
	I1213 00:13:01.298803    3552 start.go:300] post-start starting for "running-upgrade-100500" (driver="hyperv")
	I1213 00:13:01.298803    3552 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:13:01.313359    3552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:13:01.313359    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:13:04.298814    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:13:04.298814    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:04.298941    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:13:07.084574    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:13:07.084574    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:07.084574    3552 sshutil.go:53] new ssh client: &{IP:172.30.55.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-100500\id_rsa Username:docker}
	I1213 00:13:07.190539    3552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.8771531s)
	I1213 00:13:07.208161    3552 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:13:07.221085    3552 info.go:137] Remote host: Buildroot 2019.02.7
	I1213 00:13:07.221085    3552 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I1213 00:13:07.221712    3552 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I1213 00:13:07.222896    3552 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem -> 138162.pem in /etc/ssl/certs
	I1213 00:13:07.238712    3552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:13:07.247860    3552 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\138162.pem --> /etc/ssl/certs/138162.pem (1708 bytes)
	I1213 00:13:07.276990    3552 start.go:303] post-start completed in 5.9780502s
	I1213 00:13:07.276990    3552 fix.go:56] fixHost completed within 1m5.2076689s
	I1213 00:13:07.277093    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:13:09.507883    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:13:09.507931    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:09.508009    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:13:12.286053    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:13:12.286245    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:12.293152    3552 main.go:141] libmachine: Using SSH client type: native
	I1213 00:13:12.293962    3552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.55.238 22 <nil> <nil>}
	I1213 00:13:12.293962    3552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 00:13:12.453073    3552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426392.448203448
	
	I1213 00:13:12.453146    3552 fix.go:206] guest clock: 1702426392.448203448
	I1213 00:13:12.453146    3552 fix.go:219] Guest: 2023-12-13 00:13:12.448203448 +0000 UTC Remote: 2023-12-13 00:13:07.2769906 +0000 UTC m=+73.759257101 (delta=5.171212848s)
	I1213 00:13:12.453181    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:13:14.753250    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:13:14.753250    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:14.753573    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:13:17.576261    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:13:17.576488    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:17.583476    3552 main.go:141] libmachine: Using SSH client type: native
	I1213 00:13:17.584214    3552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb4f40] 0xfb7a80 <nil>  [] 0s} 172.30.55.238 22 <nil> <nil>}
	I1213 00:13:17.584214    3552 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1702426392
	I1213 00:13:17.729057    3552 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Dec 13 00:13:12 UTC 2023
	
	I1213 00:13:17.729178    3552 fix.go:226] clock set: Wed Dec 13 00:13:12 UTC 2023
	 (err=<nil>)
	I1213 00:13:17.729178    3552 start.go:83] releasing machines lock for "running-upgrade-100500", held for 1m15.6598085s
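
The three steps just above (read "date +%s.%N" from the guest, compute the 5.17s delta against the host, push an epoch back with "sudo date -s @...") are the guest-clock fix. A sketch under stated assumptions: the 2s threshold and the choice of the host clock as reference are mine, not necessarily fix.go's exact behavior:

    // Re-sync a guest clock over SSH when it drifts from the host.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func syncGuestClock(runSSH func(string) (string, error)) error {
        out, err := runSSH("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        delta := time.Since(time.Unix(0, int64(secs*1e9))) // host minus guest
        if delta > 2*time.Second || delta < -2*time.Second {
            // Assumption: push the host's epoch; the threshold is also assumed.
            _, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }

    func main() {}
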
	I1213 00:13:17.729313    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:13:20.080706    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:13:20.080706    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:20.080832    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:13:22.982729    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:13:22.982959    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:22.987120    3552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:13:22.987120    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:13:23.004829    3552 ssh_runner.go:195] Run: cat /version.json
	I1213 00:13:23.004829    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-100500 ).state
	I1213 00:13:25.662359    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:13:25.662488    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:25.662488    3552 main.go:141] libmachine: [stdout =====>] : Running
	
	I1213 00:13:25.662488    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:25.662488    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:13:25.662582    3552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-100500 ).networkadapters[0]).ipaddresses[0]
	I1213 00:13:28.725499    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:13:28.725551    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:28.726296    3552 sshutil.go:53] new ssh client: &{IP:172.30.55.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-100500\id_rsa Username:docker}
	I1213 00:13:28.788008    3552 main.go:141] libmachine: [stdout =====>] : 172.30.55.238
	
	I1213 00:13:28.788008    3552 main.go:141] libmachine: [stderr =====>] : 
	I1213 00:13:28.788345    3552 sshutil.go:53] new ssh client: &{IP:172.30.55.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\running-upgrade-100500\id_rsa Username:docker}
	I1213 00:13:28.904475    3552 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.9173275s)
	I1213 00:13:28.904768    3552 ssh_runner.go:235] Completed: cat /version.json: (5.8998401s)
	W1213 00:13:28.904850    3552 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
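
The /version.json miss is expected rather than a bug: this upgrade test boots the old minikube-v1.6.0.iso (see MinikubeISO in the config dump at the top of this run), which evidently predates the /version.json marker, so the current binary just logs the warning and moves on.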
	I1213 00:13:28.920210    3552 ssh_runner.go:195] Run: systemctl --version
	I1213 00:13:28.950784    3552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:13:28.961397    3552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:13:28.981723    3552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1213 00:13:29.014725    3552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1213 00:13:29.025967    3552 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1213 00:13:29.025967    3552 start.go:475] detecting cgroup driver to use...
	I1213 00:13:29.026317    3552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:13:29.061862    3552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1213 00:13:29.094602    3552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 00:13:29.104149    3552 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 00:13:29.119683    3552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 00:13:29.143630    3552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 00:13:29.170646    3552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 00:13:29.195496    3552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 00:13:29.220296    3552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:13:29.245506    3552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 00:13:29.271296    3552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:13:29.301071    3552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:13:29.340023    3552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:13:29.564205    3552 ssh_runner.go:195] Run: sudo systemctl restart containerd
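
Even though this profile ends up on the docker runtime, the sed edits above first normalize /etc/containerd/config.toml: pin sandbox_image to registry.k8s.io/pause:3.1, disable restrict_oom_score_adj, force SystemdCgroup = false (matching the "cgroupfs" decision logged by containerd.go:145), migrate the io.containerd.runtime.v1.linux and runc.v1 entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d; containerd is then restarted so the file is live before the runtime choice is made.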
	I1213 00:13:29.590063    3552 start.go:475] detecting cgroup driver to use...
	I1213 00:13:29.605581    3552 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 00:13:29.641807    3552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:13:29.669406    3552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:13:29.718971    3552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:13:29.747715    3552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 00:13:29.763702    3552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:13:29.797528    3552 ssh_runner.go:195] Run: which cri-dockerd
	I1213 00:13:29.816656    3552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 00:13:29.830969    3552 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1213 00:13:29.859858    3552 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 00:13:30.052489    3552 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 00:13:30.247974    3552 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 00:13:30.248256    3552 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 00:13:30.277081    3552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:13:30.456794    3552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 00:13:42.103769    3552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.6469218s)
	I1213 00:13:42.121712    3552 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1213 00:13:42.176501    3552 out.go:177] 
	W1213 00:13:42.177409    3552 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Wed 2023-12-13 00:09:06 UTC, end at Wed 2023-12-13 00:13:42 UTC. --
	Dec 13 00:10:35 running-upgrade-100500 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.421738971Z" level=info msg="Starting up"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425019971Z" level=info msg="libcontainerd: started new containerd process" pid=2741
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425092271Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425107871Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425132271Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425170971Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.472358571Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.472874071Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.473055671Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.473374771Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.473531071Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.477140371Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.477467071Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.477674371Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478013671Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478365571Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478565771Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478781171Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478874171Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478889171Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488280871Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488326971Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488419871Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488497971Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488513971Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488527171Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488541171Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488554071Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488565971Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488578271Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488712671Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.489039671Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.489726671Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.489854071Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490058271Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490169671Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490188871Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490201471Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490212271Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490224571Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490235571Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490246271Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490256671Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490390571Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490559671Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490577171Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490589671Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490722971Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490913771Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490930871Z" level=info msg="containerd successfully booted in 0.020042s"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.503155971Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.503282171Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.503309171Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.503344371Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.504669971Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.504791071Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.504841071Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.504855571Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541267571Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541375171Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541390071Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541397871Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541405771Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541455571Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541699371Z" level=info msg="Loading containers: start."
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.689947071Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.787519271Z" level=info msg="Loading containers: done."
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.822479871Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.822705371Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.892045471Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:10:35 running-upgrade-100500 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.892678571Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.700127777Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/78f927098b02df53044852318c4beb9ff841d832bfbc6e0d0594140ba16bc17d/shim.sock" debug=false pid=4392
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.704145764Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2e69b94d641498d28bf07dc8de234877062bd3dd43a65126a6901ef0be457271/shim.sock" debug=false pid=4398
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.740910963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/84d862b1b93fc128eab77e867052df2afca5ffd2ffcfac3b0b31f8a6f57213ed/shim.sock" debug=false pid=4426
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.916315476Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5b34e06ec6a0c72a6829a8da4dcea6175417daeced8917ac70f623c16fdd2882/shim.sock" debug=false pid=4463
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.919858753Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b669e5f083800e54d2b9c2368c3cb2bed43428269a76c739ae8b97504d4d8c17/shim.sock" debug=false pid=4472
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.416351880Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/11bcdbcef74af4bb184d0f2bac6afa57c835073518e2dc09a8978955acfebebc/shim.sock" debug=false pid=4665
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.429947957Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb/shim.sock" debug=false pid=4678
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.433678633Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/57c0df7cae2cfe5e1dfaeb6c11d68718204162859e955df3d05013bcbcfcea47/shim.sock" debug=false pid=4682
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.439293847Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8662eb639365fd670b46d94d3aa74b77d2d97904df0657744feadb8e02d9c3c5/shim.sock" debug=false pid=4688
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.442661716Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9e3906a1f3ed50f6c1328a1d7bab63877915f2cea95f6b1f20a8ac0f49ac3c49/shim.sock" debug=false pid=4697
	Dec 13 00:12:00 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:00.161588029Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f109b8e3e9da03a474fbca2c33754b6a94b652b1c9d8bdcc021a7913e2c4bb2e/shim.sock" debug=false pid=5611
	Dec 13 00:12:00 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:00.530596568Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/63e04d147de1eed17b8e46b122b7f71ce521f3107fef294b2e7e6116ea0c907d/shim.sock" debug=false pid=5661
	Dec 13 00:12:02 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:02.175284976Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/541df50413a6588d8abd99df1b5e01d5eb2ce7ef15199318f187a9b863745e3a/shim.sock" debug=false pid=5823
	Dec 13 00:12:02 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:02.757412464Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9dcaee4154d43cb40de490190f65f2c423e1ba5a9b9635fad8edee873232bc53/shim.sock" debug=false pid=5887
	Dec 13 00:12:03 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:03.058011438Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a6b9a7319819af090b5bb6989aa477f7c75045ff3a4ceb590f4408b9c308e896/shim.sock" debug=false pid=5935
	Dec 13 00:12:03 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:03.391303908Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9c59b7ed3d7f9347cdd13555c8e13366f53219d5e7822338ed5ccc68a124113e/shim.sock" debug=false pid=5988
	Dec 13 00:12:04 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:04.528793028Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f109f808609efff29ff4d2cd0a7e927b07c49937c43720f0718b99925e97fa5f/shim.sock" debug=false pid=6059
	Dec 13 00:12:05 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:05.245725683Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/be6357240e8359a4d7cbed6f27cf1c648e10ca8a29a936724d523bb5b31137d4/shim.sock" debug=false pid=6129
	Dec 13 00:12:47 running-upgrade-100500 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:12:47 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:47.226775069Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.341996445Z" level=info msg="shim reaped" id=2e69b94d641498d28bf07dc8de234877062bd3dd43a65126a6901ef0be457271
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.349754963Z" level=info msg="shim reaped" id=a6b9a7319819af090b5bb6989aa477f7c75045ff3a4ceb590f4408b9c308e896
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.354176931Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.357245677Z" level=info msg="shim reaped" id=78f927098b02df53044852318c4beb9ff841d832bfbc6e0d0594140ba16bc17d
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.360054920Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.373796530Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.377470186Z" level=info msg="shim reaped" id=9c59b7ed3d7f9347cdd13555c8e13366f53219d5e7822338ed5ccc68a124113e
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.392824921Z" level=warning msg="9c59b7ed3d7f9347cdd13555c8e13366f53219d5e7822338ed5ccc68a124113e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9c59b7ed3d7f9347cdd13555c8e13366f53219d5e7822338ed5ccc68a124113e/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.393102725Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.448754575Z" level=info msg="shim reaped" id=f109b8e3e9da03a474fbca2c33754b6a94b652b1c9d8bdcc021a7913e2c4bb2e
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.449833091Z" level=info msg="shim reaped" id=5b34e06ec6a0c72a6829a8da4dcea6175417daeced8917ac70f623c16fdd2882
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.451212312Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.456346591Z" level=info msg="shim reaped" id=f109f808609efff29ff4d2cd0a7e927b07c49937c43720f0718b99925e97fa5f
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.459208634Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.463185895Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.484027213Z" level=info msg="shim reaped" id=9e3906a1f3ed50f6c1328a1d7bab63877915f2cea95f6b1f20a8ac0f49ac3c49
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.494880979Z" level=warning msg="9e3906a1f3ed50f6c1328a1d7bab63877915f2cea95f6b1f20a8ac0f49ac3c49 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9e3906a1f3ed50f6c1328a1d7bab63877915f2cea95f6b1f20a8ac0f49ac3c49/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.495254085Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.504169521Z" level=info msg="shim reaped" id=541df50413a6588d8abd99df1b5e01d5eb2ce7ef15199318f187a9b863745e3a
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.507789276Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.530491623Z" level=info msg="shim reaped" id=11bcdbcef74af4bb184d0f2bac6afa57c835073518e2dc09a8978955acfebebc
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.542383505Z" level=info msg="shim reaped" id=b669e5f083800e54d2b9c2368c3cb2bed43428269a76c739ae8b97504d4d8c17
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.544247133Z" level=info msg="shim reaped" id=63e04d147de1eed17b8e46b122b7f71ce521f3107fef294b2e7e6116ea0c907d
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.548167693Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.548756802Z" level=info msg="shim reaped" id=57c0df7cae2cfe5e1dfaeb6c11d68718204162859e955df3d05013bcbcfcea47
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.552977266Z" level=warning msg="11bcdbcef74af4bb184d0f2bac6afa57c835073518e2dc09a8978955acfebebc cleanup: failed to unmount IPC: umount /var/lib/docker/containers/11bcdbcef74af4bb184d0f2bac6afa57c835073518e2dc09a8978955acfebebc/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.555282802Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.557034928Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.557365333Z" level=warning msg="63e04d147de1eed17b8e46b122b7f71ce521f3107fef294b2e7e6116ea0c907d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/63e04d147de1eed17b8e46b122b7f71ce521f3107fef294b2e7e6116ea0c907d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.559348164Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.559932973Z" level=warning msg="57c0df7cae2cfe5e1dfaeb6c11d68718204162859e955df3d05013bcbcfcea47 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/57c0df7cae2cfe5e1dfaeb6c11d68718204162859e955df3d05013bcbcfcea47/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.637643959Z" level=info msg="shim reaped" id=8662eb639365fd670b46d94d3aa74b77d2d97904df0657744feadb8e02d9c3c5
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.663618156Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.663737158Z" level=warning msg="8662eb639365fd670b46d94d3aa74b77d2d97904df0657744feadb8e02d9c3c5 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8662eb639365fd670b46d94d3aa74b77d2d97904df0657744feadb8e02d9c3c5/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.663832159Z" level=info msg="shim reaped" id=84d862b1b93fc128eab77e867052df2afca5ffd2ffcfac3b0b31f8a6f57213ed
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.674121116Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.827707862Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c0294c8fc81ba3f730e7d33444441e9b15141f4239d3b3a9da7e748849d87b03/shim.sock" debug=false pid=7487
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.085963779Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/60bc94f0d098b5c830e1cacf1caf227e7e568f03df155fa96f36888f8d37903f/shim.sock" debug=false pid=7529
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.778765746Z" level=info msg="shim reaped" id=60bc94f0d098b5c830e1cacf1caf227e7e568f03df155fa96f36888f8d37903f
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.788280489Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.788802796Z" level=warning msg="60bc94f0d098b5c830e1cacf1caf227e7e568f03df155fa96f36888f8d37903f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/60bc94f0d098b5c830e1cacf1caf227e7e568f03df155fa96f36888f8d37903f/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.850741623Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4da1269053d776f32b410defcaa0d65fc75414f83d65c765837792b23b2c7394/shim.sock" debug=false pid=7633
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.868945696Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e397386a3fa398e9610bb3d0df368b8e85b3e1c1dfac5383338500c6229f7413/shim.sock" debug=false pid=7643
	Dec 13 00:12:50 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:50.175976738Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059/shim.sock" debug=false pid=7724
	Dec 13 00:12:50 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:50.186273089Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e/shim.sock" debug=false pid=7732
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.721508277Z" level=info msg="shim reaped" id=be6357240e8359a4d7cbed6f27cf1c648e10ca8a29a936724d523bb5b31137d4
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.729775593Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.730012997Z" level=warning msg="be6357240e8359a4d7cbed6f27cf1c648e10ca8a29a936724d523bb5b31137d4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/be6357240e8359a4d7cbed6f27cf1c648e10ca8a29a936724d523bb5b31137d4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.760251123Z" level=info msg="shim reaped" id=9dcaee4154d43cb40de490190f65f2c423e1ba5a9b9635fad8edee873232bc53
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.770627169Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.770757971Z" level=warning msg="9dcaee4154d43cb40de490190f65f2c423e1ba5a9b9635fad8edee873232bc53 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9dcaee4154d43cb40de490190f65f2c423e1ba5a9b9635fad8edee873232bc53/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.911750159Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b464eae6919682c950a83b0ca14dd5a9674789bd7fcdecb614ff551fb6d2489b/shim.sock" debug=false pid=7933
	Dec 13 00:12:53 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:53.284047032Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3/shim.sock" debug=false pid=7998
	Dec 13 00:12:53 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:53.990196698Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d2d875dadd6d0c131bc11d6830af935509433f263849f9f6cdf4cd2deea7e253/shim.sock" debug=false pid=8048
	Dec 13 00:12:54 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:54.328580892Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8/shim.sock" debug=false pid=8099
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.466131172Z" level=info msg="Container 54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb failed to exit within 10 seconds of signal 15 - using the force"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.631482994Z" level=info msg="shim reaped" id=54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.642264632Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.642703338Z" level=warning msg="54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.716668187Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.716854989Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.716969191Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.717095392Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.747088177Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.753139255Z" level=warning msg="b266dc1117aaa370ee1d3ad54371050afc88698aea3a9c3ede7b86b7056beeb3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b266dc1117aaa370ee1d3ad54371050afc88698aea3a9c3ede7b86b7056beeb3/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.761824366Z" level=error msg="b266dc1117aaa370ee1d3ad54371050afc88698aea3a9c3ede7b86b7056beeb3 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.761996368Z" level=error msg="Handler for POST /containers/b266dc1117aaa370ee1d3ad54371050afc88698aea3a9c3ede7b86b7056beeb3/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.333178921Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.337117570Z" level=warning msg="5d50b18ef3b4cd0245fcc10026114621356fd828fe38ed5cd559648fc7639421 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5d50b18ef3b4cd0245fcc10026114621356fd828fe38ed5cd559648fc7639421/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.345935081Z" level=error msg="5d50b18ef3b4cd0245fcc10026114621356fd828fe38ed5cd559648fc7639421 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.346045383Z" level=error msg="Handler for POST /containers/5d50b18ef3b4cd0245fcc10026114621356fd828fe38ed5cd559648fc7639421/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.350795943Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.389891835Z" level=warning msg="17f7aa1e02bfcd5dc61b41951280da609207848085b857d5ecdb9450e55c3894 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/17f7aa1e02bfcd5dc61b41951280da609207848085b857d5ecdb9450e55c3894/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.397382930Z" level=error msg="17f7aa1e02bfcd5dc61b41951280da609207848085b857d5ecdb9450e55c3894 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.397644133Z" level=error msg="Handler for POST /containers/17f7aa1e02bfcd5dc61b41951280da609207848085b857d5ecdb9450e55c3894/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Succeeded.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7487 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7633 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7643 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7724 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7732 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7933 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7998 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 8048 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 8099 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.778008626Z" level=info msg="Starting up"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780664359Z" level=info msg="libcontainerd: started new containerd process" pid=8245
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780741960Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780761460Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780790761Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780821961Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.827088244Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.827735952Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.828365260Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.828870967Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.829023669Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.831172596Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.831348098Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.832117208Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833165621Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833684327Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833803029Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833839229Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833852929Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833863130Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833982531Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834131233Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834223134Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834472637Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834493838Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834508538Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834522438Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834535138Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834546838Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834558938Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.872673819Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.872842321Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.873579030Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874130737Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874378940Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874401740Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874472241Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874493042Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874505442Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874519442Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874531842Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874542942Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874554342Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874590843Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874607043Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874618743Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874630943Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874769445Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874927247Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874942747Z" level=info msg="containerd successfully booted in 0.049732s"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.886106788Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.886203489Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.886244290Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.886256390Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.887738308Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.887836310Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.887918411Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.888028712Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.893620283Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.978372251Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.978674254Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.978740855Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.978809056Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.979155260Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.979251562Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.979632366Z" level=info msg="Loading containers: start."
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.130867792Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.131818404Z" level=warning msg="475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.164912306Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.179347982Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.186327467Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.221721797Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=c0294c8fc81ba3f730e7d33444441e9b15141f4239d3b3a9da7e748849d87b03 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/c0294c8fc81ba3f730e7d33444441e9b15141f4239d3b3a9da7e748849d87b03"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.222406705Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.223664421Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.246049393Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.281190520Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=e397386a3fa398e9610bb3d0df368b8e85b3e1c1dfac5383338500c6229f7413 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e397386a3fa398e9610bb3d0df368b8e85b3e1c1dfac5383338500c6229f7413"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.281780827Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.289456921Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.291210342Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=4da1269053d776f32b410defcaa0d65fc75414f83d65c765837792b23b2c7394 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/4da1269053d776f32b410defcaa0d65fc75414f83d65c765837792b23b2c7394"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.299122738Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.308123648Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=d2d875dadd6d0c131bc11d6830af935509433f263849f9f6cdf4cd2deea7e253 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d2d875dadd6d0c131bc11d6830af935509433f263849f9f6cdf4cd2deea7e253"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.311607290Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.353660401Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.378263001Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=b464eae6919682c950a83b0ca14dd5a9674789bd7fcdecb614ff551fb6d2489b path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/b464eae6919682c950a83b0ca14dd5a9674789bd7fcdecb614ff551fb6d2489b"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.380478628Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.381908745Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.382155448Z" level=warning msg="05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.393947691Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.398749450Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.561571430Z" level=info msg="Removing stale sandbox 37313b4280ad3113b5834f6e5eee0707fcdfb670ce1a0f07de1cc767d9d5c7d7 (4da1269053d776f32b410defcaa0d65fc75414f83d65c765837792b23b2c7394)"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.565367176Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0e3815aba0194611a78c7fa44e6f2cf59fcb364c68e3176098cac7ff90c379ab 75669bde287108a76710bae22f576b8b2a7fa954ba60af5d6f492adb7eb8e3a1], retrying...."
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.724625413Z" level=info msg="Removing stale sandbox 3f873fec02dce8136c0c4e480a63c15e2fe4bd42185768221def90be49bc81b1 (c0294c8fc81ba3f730e7d33444441e9b15141f4239d3b3a9da7e748849d87b03)"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.728215456Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0e3815aba0194611a78c7fa44e6f2cf59fcb364c68e3176098cac7ff90c379ab 358a036945141598e3c81e3c1d76d8b1406c2e927b6607dbb8ae4e1a93a9e6b4], retrying...."
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.868171458Z" level=info msg="Removing stale sandbox 711e22165e874a0f29add00a2d96541e68db4a6fa5a2d9e86a45d7aa45952714 (e397386a3fa398e9610bb3d0df368b8e85b3e1c1dfac5383338500c6229f7413)"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.872146007Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0e3815aba0194611a78c7fa44e6f2cf59fcb364c68e3176098cac7ff90c379ab 801ab2a32890a4c4cf0899d58194050829b78c9bc80f86f87a6bab1d227be518], retrying...."
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.002161088Z" level=info msg="Removing stale sandbox 9a26f5217d32256e20987a0d4c3bfc2f2e2691e8943875b7446cfb046ef82340 (d2d875dadd6d0c131bc11d6830af935509433f263849f9f6cdf4cd2deea7e253)"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.005169124Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0e3815aba0194611a78c7fa44e6f2cf59fcb364c68e3176098cac7ff90c379ab e7e892aa33123b5faad5af293539772927e201052101974f938dfe84ef096e50], retrying...."
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.137686307Z" level=info msg="Removing stale sandbox f13e5f3ec3bece2a2c7e53b69dcd227134d5af5cac3889bd0623c5c58ee74847 (b464eae6919682c950a83b0ca14dd5a9674789bd7fcdecb614ff551fb6d2489b)"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.146016207Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 1c9e0b655f0cce11bc0f65831a85dfcc4b2558bafd8e4d83063ad81ef9daef44 77b7bc2ca901e5e13ca0a42eda6d2d1d79f4830b6160fa479febe08ea6818bc8], retrying...."
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.164928933Z" level=info msg="There are old running containers, the network config will not take affect"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.236960494Z" level=info msg="Loading containers: done."
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.268660473Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.268841175Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:13:01 running-upgrade-100500 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.293570670Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.293742772Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.828198260Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.828563164Z" level=warning msg="ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.838545684Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.844631256Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.982568405Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1bc76be3e11ab77c94171c4902a1c76e21b536f1cc02d6a79ea0b75d24a8366b/shim.sock" debug=false pid=8915
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.990717002Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fde4c6348e62e8dcd785ce0618a6adf9f04a2e06623bbccb81b01ed50f341a6c/shim.sock" debug=false pid=8925
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.010332635Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0e3ebea04d6f1288eed74b6bbea15273af2c0cd50dc5680f1e697acd1f11c693/shim.sock" debug=false pid=8938
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.017989425Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a88bdcaf0e6a7391fbfdbec71ad747e3dd9d3371c98d36e64fa81d807f20ceb6/shim.sock" debug=false pid=8946
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.018984136Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/831037c0998655368e41fb78123263347b64e66ed87f276850ae608b5ad1c085/shim.sock" debug=false pid=8951
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.043718327Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/69a6359d7f7c11d112023e2ddd421590e4859ff08f726650de480f0c655f49b0/shim.sock" debug=false pid=8960
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.054548654Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c8aa30bc9ddba000d0e4c494b1b5ace9ee931f6724f2fef2465e1f3034f25bf6/shim.sock" debug=false pid=8968
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.085110713Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8e3fe1ef03a34998ea3bc97a30e454343b3dced7c44280b5363a13e0969feb5a/shim.sock" debug=false pid=9005
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.754103473Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d/shim.sock" debug=false pid=9251
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.777656950Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5ad95879a099e789f2a2a4f092fa10d44ce3774be11f2f67c340a7800761d9e2/shim.sock" debug=false pid=9253
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.884078400Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f70e8f796987f94d9b7e926e77330e6e40c0d804017e845f99b0a0a795e3cb68/shim.sock" debug=false pid=9289
	Dec 13 00:13:03 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:03.055690205Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4c6b31bcbf49738be0b19d63b5d1106e6c0af7b5bcb8ea0f4c6287f125756ccb/shim.sock" debug=false pid=9331
	Dec 13 00:13:03 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:03.157726484Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2cabf2833e1dc5546a6a4354ce0741d44ba28dad026a2d9830508092c48ccb4a/shim.sock" debug=false pid=9369
	Dec 13 00:13:03 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:03.543647242Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/be90f2048f991b102830c4841a5e10812ff04c490ae2de8e93369c751dec6362/shim.sock" debug=false pid=9447
	Dec 13 00:13:04 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:04.843191094Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:04 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:04.843699899Z" level=warning msg="298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:04 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:04.879561607Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3"
	Dec 13 00:13:04 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:04.879899111Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:05 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:05.070885968Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/421f067083d895b7d3fefde359a367003826a4698e75c2c3ebe6d62a458002a1/shim.sock" debug=false pid=9621
	Dec 13 00:13:05 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:05.515689839Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/325252bedb25dec2423618ec683aee129f13b37a06ef753d19a4b7b61ad20ce9/shim.sock" debug=false pid=9698
	Dec 13 00:13:19 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:19.062896950Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/06544ec42538456e5932c9653f7812684bdcc33fc3caf1c847b101ff0b247ca9/shim.sock" debug=false pid=9963
	Dec 13 00:13:22 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:22.060422403Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dd4a5fa9714fa3f0955e4346288ef4d742f8607ec0cdc6b145eb31cee78e622a/shim.sock" debug=false pid=10082
	Dec 13 00:13:30 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:30.463366147Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:13:30 running-upgrade-100500 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.559590327Z" level=info msg="shim reaped" id=5ad95879a099e789f2a2a4f092fa10d44ce3774be11f2f67c340a7800761d9e2
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.564109852Z" level=info msg="shim reaped" id=c8aa30bc9ddba000d0e4c494b1b5ace9ee931f6724f2fef2465e1f3034f25bf6
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.569712483Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.569988784Z" level=warning msg="5ad95879a099e789f2a2a4f092fa10d44ce3774be11f2f67c340a7800761d9e2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5ad95879a099e789f2a2a4f092fa10d44ce3774be11f2f67c340a7800761d9e2/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.574470909Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.609913702Z" level=info msg="shim reaped" id=a88bdcaf0e6a7391fbfdbec71ad747e3dd9d3371c98d36e64fa81d807f20ceb6
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.629626309Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.636205945Z" level=info msg="shim reaped" id=fde4c6348e62e8dcd785ce0618a6adf9f04a2e06623bbccb81b01ed50f341a6c
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.661218582Z" level=info msg="shim reaped" id=1bc76be3e11ab77c94171c4902a1c76e21b536f1cc02d6a79ea0b75d24a8366b
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.663373094Z" level=info msg="shim reaped" id=f70e8f796987f94d9b7e926e77330e6e40c0d804017e845f99b0a0a795e3cb68
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.665632306Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.688676532Z" level=info msg="shim reaped" id=69a6359d7f7c11d112023e2ddd421590e4859ff08f726650de480f0c655f49b0
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.698853387Z" level=info msg="shim reaped" id=4c6b31bcbf49738be0b19d63b5d1106e6c0af7b5bcb8ea0f4c6287f125756ccb
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.701273500Z" level=info msg="shim reaped" id=831037c0998655368e41fb78123263347b64e66ed87f276850ae608b5ad1c085
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.710749652Z" level=info msg="shim reaped" id=2cabf2833e1dc5546a6a4354ce0741d44ba28dad026a2d9830508092c48ccb4a
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.712770863Z" level=info msg="shim reaped" id=421f067083d895b7d3fefde359a367003826a4698e75c2c3ebe6d62a458002a1
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.718595795Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.718832396Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.718991597Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.719413499Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.721878213Z" level=warning msg="4c6b31bcbf49738be0b19d63b5d1106e6c0af7b5bcb8ea0f4c6287f125756ccb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4c6b31bcbf49738be0b19d63b5d1106e6c0af7b5bcb8ea0f4c6287f125756ccb/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.728854651Z" level=info msg="shim reaped" id=0e3ebea04d6f1288eed74b6bbea15273af2c0cd50dc5680f1e697acd1f11c693
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.733089474Z" level=info msg="shim reaped" id=dd4a5fa9714fa3f0955e4346288ef4d742f8607ec0cdc6b145eb31cee78e622a
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.738427703Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.739133407Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.739630410Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.741420819Z" level=warning msg="2cabf2833e1dc5546a6a4354ce0741d44ba28dad026a2d9830508092c48ccb4a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2cabf2833e1dc5546a6a4354ce0741d44ba28dad026a2d9830508092c48ccb4a/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.742479225Z" level=warning msg="f70e8f796987f94d9b7e926e77330e6e40c0d804017e845f99b0a0a795e3cb68 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f70e8f796987f94d9b7e926e77330e6e40c0d804017e845f99b0a0a795e3cb68/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.746546547Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.746573448Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.746848749Z" level=warning msg="dd4a5fa9714fa3f0955e4346288ef4d742f8607ec0cdc6b145eb31cee78e622a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/dd4a5fa9714fa3f0955e4346288ef4d742f8607ec0cdc6b145eb31cee78e622a/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.858812360Z" level=info msg="shim reaped" id=06544ec42538456e5932c9653f7812684bdcc33fc3caf1c847b101ff0b247ca9
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.860839271Z" level=info msg="shim reaped" id=8e3fe1ef03a34998ea3bc97a30e454343b3dced7c44280b5363a13e0969feb5a
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.890114031Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.890304732Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.890521133Z" level=warning msg="06544ec42538456e5932c9653f7812684bdcc33fc3caf1c847b101ff0b247ca9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/06544ec42538456e5932c9653f7812684bdcc33fc3caf1c847b101ff0b247ca9/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:32 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:32.270292405Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/518273dc69edb2f9b539a6d419ed705b8ab59871e83592fcc8679c97578d4676/shim.sock" debug=false pid=10986
	Dec 13 00:13:33 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:33.217831974Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9a5cdd1da5d3dc40c243f6c9e3b2d134575b94945c9f69afaf4d5f7e0f977499/shim.sock" debug=false pid=11031
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.423985710Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9cab93a2e51415cf8a51b421559883d158637d7baa551601c4f5f3b6dd1ec13b/shim.sock" debug=false pid=11089
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.828035615Z" level=info msg="shim reaped" id=be90f2048f991b102830c4841a5e10812ff04c490ae2de8e93369c751dec6362
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.838141270Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.838604572Z" level=warning msg="be90f2048f991b102830c4841a5e10812ff04c490ae2de8e93369c751dec6362 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/be90f2048f991b102830c4841a5e10812ff04c490ae2de8e93369c751dec6362/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.889173748Z" level=info msg="shim reaped" id=325252bedb25dec2423618ec683aee129f13b37a06ef753d19a4b7b61ad20ce9
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.901908418Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.902091219Z" level=warning msg="325252bedb25dec2423618ec683aee129f13b37a06ef753d19a4b7b61ad20ce9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/325252bedb25dec2423618ec683aee129f13b37a06ef753d19a4b7b61ad20ce9/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:37 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:37.574770844Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/481c45c3985e38ff8a6c897547fa632a2f8029d5cb3944b647f0df5e2e1308bd/shim.sock" debug=false pid=11236
	Dec 13 00:13:39 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:39.357450570Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1d76552925044d5ddad623906b67938aea8175770e48dba45d1f2dd70419cdbf/shim.sock" debug=false pid=11303
	Dec 13 00:13:40 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:40.788138675Z" level=info msg="Container cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d failed to exit within 10 seconds of signal 15 - using the force"
	Dec 13 00:13:40 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:40.944702829Z" level=info msg="shim reaped" id=cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d
	Dec 13 00:13:40 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:40.953219776Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:40 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:40.953466077Z" level=warning msg="cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.016473721Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.016610622Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.016710022Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.016717222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.061800568Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.072622727Z" level=warning msg="4f56a9b3c2f105710fa2b1c0ad2c1b042ec6823df1604bb69b70c5f706cda3b4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4f56a9b3c2f105710fa2b1c0ad2c1b042ec6823df1604bb69b70c5f706cda3b4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.078738861Z" level=error msg="4f56a9b3c2f105710fa2b1c0ad2c1b042ec6823df1604bb69b70c5f706cda3b4 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.078873961Z" level=error msg="Handler for POST /containers/4f56a9b3c2f105710fa2b1c0ad2c1b042ec6823df1604bb69b70c5f706cda3b4/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.589558047Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.595532380Z" level=warning msg="92dd0ea88f313a5a491bd5d2bab785f94f16f851e5590328ab341f03f35065e6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/92dd0ea88f313a5a491bd5d2bab785f94f16f851e5590328ab341f03f35065e6/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.602110016Z" level=error msg="92dd0ea88f313a5a491bd5d2bab785f94f16f851e5590328ab341f03f35065e6 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.602220617Z" level=error msg="Handler for POST /containers/92dd0ea88f313a5a491bd5d2bab785f94f16f851e5590328ab341f03f35065e6/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Succeeded.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 10986 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 11031 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 11089 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 11236 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 11303 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.089439775Z" level=info msg="Starting up"
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.092672592Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.092809293Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.092841793Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.092868293Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.093303996Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Wed 2023-12-13 00:09:06 UTC, end at Wed 2023-12-13 00:13:42 UTC. --
	Dec 13 00:10:35 running-upgrade-100500 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.421738971Z" level=info msg="Starting up"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425019971Z" level=info msg="libcontainerd: started new containerd process" pid=2741
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425092271Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425107871Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425132271Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.425170971Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.472358571Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.472874071Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.473055671Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.473374771Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.473531071Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.477140371Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.477467071Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.477674371Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478013671Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478365571Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478565771Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478781171Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478874171Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.478889171Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488280871Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488326971Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488419871Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488497971Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488513971Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488527171Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488541171Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488554071Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488565971Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488578271Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.488712671Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.489039671Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.489726671Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.489854071Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490058271Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490169671Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490188871Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490201471Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490212271Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490224571Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490235571Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490246271Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490256671Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490390571Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490559671Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490577171Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490589671Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490722971Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490913771Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.490930871Z" level=info msg="containerd successfully booted in 0.020042s"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.503155971Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.503282171Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.503309171Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.503344371Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.504669971Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.504791071Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.504841071Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.504855571Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541267571Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541375171Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541390071Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541397871Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541405771Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541455571Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.541699371Z" level=info msg="Loading containers: start."
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.689947071Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.787519271Z" level=info msg="Loading containers: done."
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.822479871Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.822705371Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.892045471Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:10:35 running-upgrade-100500 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:10:35 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:10:35.892678571Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.700127777Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/78f927098b02df53044852318c4beb9ff841d832bfbc6e0d0594140ba16bc17d/shim.sock" debug=false pid=4392
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.704145764Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2e69b94d641498d28bf07dc8de234877062bd3dd43a65126a6901ef0be457271/shim.sock" debug=false pid=4398
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.740910963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/84d862b1b93fc128eab77e867052df2afca5ffd2ffcfac3b0b31f8a6f57213ed/shim.sock" debug=false pid=4426
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.916315476Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5b34e06ec6a0c72a6829a8da4dcea6175417daeced8917ac70f623c16fdd2882/shim.sock" debug=false pid=4463
	Dec 13 00:11:38 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:38.919858753Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b669e5f083800e54d2b9c2368c3cb2bed43428269a76c739ae8b97504d4d8c17/shim.sock" debug=false pid=4472
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.416351880Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/11bcdbcef74af4bb184d0f2bac6afa57c835073518e2dc09a8978955acfebebc/shim.sock" debug=false pid=4665
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.429947957Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb/shim.sock" debug=false pid=4678
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.433678633Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/57c0df7cae2cfe5e1dfaeb6c11d68718204162859e955df3d05013bcbcfcea47/shim.sock" debug=false pid=4682
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.439293847Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8662eb639365fd670b46d94d3aa74b77d2d97904df0657744feadb8e02d9c3c5/shim.sock" debug=false pid=4688
	Dec 13 00:11:39 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:11:39.442661716Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9e3906a1f3ed50f6c1328a1d7bab63877915f2cea95f6b1f20a8ac0f49ac3c49/shim.sock" debug=false pid=4697
	Dec 13 00:12:00 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:00.161588029Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f109b8e3e9da03a474fbca2c33754b6a94b652b1c9d8bdcc021a7913e2c4bb2e/shim.sock" debug=false pid=5611
	Dec 13 00:12:00 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:00.530596568Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/63e04d147de1eed17b8e46b122b7f71ce521f3107fef294b2e7e6116ea0c907d/shim.sock" debug=false pid=5661
	Dec 13 00:12:02 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:02.175284976Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/541df50413a6588d8abd99df1b5e01d5eb2ce7ef15199318f187a9b863745e3a/shim.sock" debug=false pid=5823
	Dec 13 00:12:02 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:02.757412464Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9dcaee4154d43cb40de490190f65f2c423e1ba5a9b9635fad8edee873232bc53/shim.sock" debug=false pid=5887
	Dec 13 00:12:03 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:03.058011438Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a6b9a7319819af090b5bb6989aa477f7c75045ff3a4ceb590f4408b9c308e896/shim.sock" debug=false pid=5935
	Dec 13 00:12:03 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:03.391303908Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9c59b7ed3d7f9347cdd13555c8e13366f53219d5e7822338ed5ccc68a124113e/shim.sock" debug=false pid=5988
	Dec 13 00:12:04 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:04.528793028Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f109f808609efff29ff4d2cd0a7e927b07c49937c43720f0718b99925e97fa5f/shim.sock" debug=false pid=6059
	Dec 13 00:12:05 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:05.245725683Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/be6357240e8359a4d7cbed6f27cf1c648e10ca8a29a936724d523bb5b31137d4/shim.sock" debug=false pid=6129
	Dec 13 00:12:47 running-upgrade-100500 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:12:47 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:47.226775069Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.341996445Z" level=info msg="shim reaped" id=2e69b94d641498d28bf07dc8de234877062bd3dd43a65126a6901ef0be457271
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.349754963Z" level=info msg="shim reaped" id=a6b9a7319819af090b5bb6989aa477f7c75045ff3a4ceb590f4408b9c308e896
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.354176931Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.357245677Z" level=info msg="shim reaped" id=78f927098b02df53044852318c4beb9ff841d832bfbc6e0d0594140ba16bc17d
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.360054920Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.373796530Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.377470186Z" level=info msg="shim reaped" id=9c59b7ed3d7f9347cdd13555c8e13366f53219d5e7822338ed5ccc68a124113e
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.392824921Z" level=warning msg="9c59b7ed3d7f9347cdd13555c8e13366f53219d5e7822338ed5ccc68a124113e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9c59b7ed3d7f9347cdd13555c8e13366f53219d5e7822338ed5ccc68a124113e/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.393102725Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.448754575Z" level=info msg="shim reaped" id=f109b8e3e9da03a474fbca2c33754b6a94b652b1c9d8bdcc021a7913e2c4bb2e
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.449833091Z" level=info msg="shim reaped" id=5b34e06ec6a0c72a6829a8da4dcea6175417daeced8917ac70f623c16fdd2882
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.451212312Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.456346591Z" level=info msg="shim reaped" id=f109f808609efff29ff4d2cd0a7e927b07c49937c43720f0718b99925e97fa5f
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.459208634Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.463185895Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.484027213Z" level=info msg="shim reaped" id=9e3906a1f3ed50f6c1328a1d7bab63877915f2cea95f6b1f20a8ac0f49ac3c49
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.494880979Z" level=warning msg="9e3906a1f3ed50f6c1328a1d7bab63877915f2cea95f6b1f20a8ac0f49ac3c49 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9e3906a1f3ed50f6c1328a1d7bab63877915f2cea95f6b1f20a8ac0f49ac3c49/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.495254085Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.504169521Z" level=info msg="shim reaped" id=541df50413a6588d8abd99df1b5e01d5eb2ce7ef15199318f187a9b863745e3a
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.507789276Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.530491623Z" level=info msg="shim reaped" id=11bcdbcef74af4bb184d0f2bac6afa57c835073518e2dc09a8978955acfebebc
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.542383505Z" level=info msg="shim reaped" id=b669e5f083800e54d2b9c2368c3cb2bed43428269a76c739ae8b97504d4d8c17
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.544247133Z" level=info msg="shim reaped" id=63e04d147de1eed17b8e46b122b7f71ce521f3107fef294b2e7e6116ea0c907d
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.548167693Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.548756802Z" level=info msg="shim reaped" id=57c0df7cae2cfe5e1dfaeb6c11d68718204162859e955df3d05013bcbcfcea47
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.552977266Z" level=warning msg="11bcdbcef74af4bb184d0f2bac6afa57c835073518e2dc09a8978955acfebebc cleanup: failed to unmount IPC: umount /var/lib/docker/containers/11bcdbcef74af4bb184d0f2bac6afa57c835073518e2dc09a8978955acfebebc/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.555282802Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.557034928Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.557365333Z" level=warning msg="63e04d147de1eed17b8e46b122b7f71ce521f3107fef294b2e7e6116ea0c907d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/63e04d147de1eed17b8e46b122b7f71ce521f3107fef294b2e7e6116ea0c907d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.559348164Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.559932973Z" level=warning msg="57c0df7cae2cfe5e1dfaeb6c11d68718204162859e955df3d05013bcbcfcea47 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/57c0df7cae2cfe5e1dfaeb6c11d68718204162859e955df3d05013bcbcfcea47/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.637643959Z" level=info msg="shim reaped" id=8662eb639365fd670b46d94d3aa74b77d2d97904df0657744feadb8e02d9c3c5
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.663618156Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.663737158Z" level=warning msg="8662eb639365fd670b46d94d3aa74b77d2d97904df0657744feadb8e02d9c3c5 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8662eb639365fd670b46d94d3aa74b77d2d97904df0657744feadb8e02d9c3c5/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.663832159Z" level=info msg="shim reaped" id=84d862b1b93fc128eab77e867052df2afca5ffd2ffcfac3b0b31f8a6f57213ed
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.674121116Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:48 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:48.827707862Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c0294c8fc81ba3f730e7d33444441e9b15141f4239d3b3a9da7e748849d87b03/shim.sock" debug=false pid=7487
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.085963779Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/60bc94f0d098b5c830e1cacf1caf227e7e568f03df155fa96f36888f8d37903f/shim.sock" debug=false pid=7529
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.778765746Z" level=info msg="shim reaped" id=60bc94f0d098b5c830e1cacf1caf227e7e568f03df155fa96f36888f8d37903f
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.788280489Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.788802796Z" level=warning msg="60bc94f0d098b5c830e1cacf1caf227e7e568f03df155fa96f36888f8d37903f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/60bc94f0d098b5c830e1cacf1caf227e7e568f03df155fa96f36888f8d37903f/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.850741623Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4da1269053d776f32b410defcaa0d65fc75414f83d65c765837792b23b2c7394/shim.sock" debug=false pid=7633
	Dec 13 00:12:49 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:49.868945696Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e397386a3fa398e9610bb3d0df368b8e85b3e1c1dfac5383338500c6229f7413/shim.sock" debug=false pid=7643
	Dec 13 00:12:50 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:50.175976738Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059/shim.sock" debug=false pid=7724
	Dec 13 00:12:50 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:50.186273089Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e/shim.sock" debug=false pid=7732
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.721508277Z" level=info msg="shim reaped" id=be6357240e8359a4d7cbed6f27cf1c648e10ca8a29a936724d523bb5b31137d4
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.729775593Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.730012997Z" level=warning msg="be6357240e8359a4d7cbed6f27cf1c648e10ca8a29a936724d523bb5b31137d4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/be6357240e8359a4d7cbed6f27cf1c648e10ca8a29a936724d523bb5b31137d4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.760251123Z" level=info msg="shim reaped" id=9dcaee4154d43cb40de490190f65f2c423e1ba5a9b9635fad8edee873232bc53
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.770627169Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.770757971Z" level=warning msg="9dcaee4154d43cb40de490190f65f2c423e1ba5a9b9635fad8edee873232bc53 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9dcaee4154d43cb40de490190f65f2c423e1ba5a9b9635fad8edee873232bc53/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:52 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:52.911750159Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b464eae6919682c950a83b0ca14dd5a9674789bd7fcdecb614ff551fb6d2489b/shim.sock" debug=false pid=7933
	Dec 13 00:12:53 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:53.284047032Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3/shim.sock" debug=false pid=7998
	Dec 13 00:12:53 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:53.990196698Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d2d875dadd6d0c131bc11d6830af935509433f263849f9f6cdf4cd2deea7e253/shim.sock" debug=false pid=8048
	Dec 13 00:12:54 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:54.328580892Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8/shim.sock" debug=false pid=8099
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.466131172Z" level=info msg="Container 54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb failed to exit within 10 seconds of signal 15 - using the force"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.631482994Z" level=info msg="shim reaped" id=54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.642264632Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.642703338Z" level=warning msg="54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/54aa3ecf0aa1e7b862d368ba246b5cd1b82c894a9a1e23c357730a1002163dcb/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.716668187Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.716854989Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.716969191Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.717095392Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.747088177Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.753139255Z" level=warning msg="b266dc1117aaa370ee1d3ad54371050afc88698aea3a9c3ede7b86b7056beeb3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b266dc1117aaa370ee1d3ad54371050afc88698aea3a9c3ede7b86b7056beeb3/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.761824366Z" level=error msg="b266dc1117aaa370ee1d3ad54371050afc88698aea3a9c3ede7b86b7056beeb3 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:12:57 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:57.761996368Z" level=error msg="Handler for POST /containers/b266dc1117aaa370ee1d3ad54371050afc88698aea3a9c3ede7b86b7056beeb3/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.333178921Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.337117570Z" level=warning msg="5d50b18ef3b4cd0245fcc10026114621356fd828fe38ed5cd559648fc7639421 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5d50b18ef3b4cd0245fcc10026114621356fd828fe38ed5cd559648fc7639421/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.345935081Z" level=error msg="5d50b18ef3b4cd0245fcc10026114621356fd828fe38ed5cd559648fc7639421 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.346045383Z" level=error msg="Handler for POST /containers/5d50b18ef3b4cd0245fcc10026114621356fd828fe38ed5cd559648fc7639421/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.350795943Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.389891835Z" level=warning msg="17f7aa1e02bfcd5dc61b41951280da609207848085b857d5ecdb9450e55c3894 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/17f7aa1e02bfcd5dc61b41951280da609207848085b857d5ecdb9450e55c3894/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.397382930Z" level=error msg="17f7aa1e02bfcd5dc61b41951280da609207848085b857d5ecdb9450e55c3894 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[2733]: time="2023-12-13T00:12:58.397644133Z" level=error msg="Handler for POST /containers/17f7aa1e02bfcd5dc61b41951280da609207848085b857d5ecdb9450e55c3894/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Succeeded.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7487 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7633 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7643 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7724 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7732 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7933 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 7998 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 8048 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 8099 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:12:58 running-upgrade-100500 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.778008626Z" level=info msg="Starting up"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780664359Z" level=info msg="libcontainerd: started new containerd process" pid=8245
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780741960Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780761460Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780790761Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.780821961Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.827088244Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.827735952Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.828365260Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.828870967Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.829023669Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.831172596Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.831348098Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.832117208Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833165621Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833684327Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833803029Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833839229Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833852929Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833863130Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.833982531Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834131233Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834223134Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834472637Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834493838Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834508538Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834522438Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834535138Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834546838Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.834558938Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.872673819Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.872842321Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.873579030Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874130737Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874378940Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874401740Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874472241Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874493042Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874505442Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874519442Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874531842Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874542942Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874554342Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874590843Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874607043Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874618743Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874630943Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874769445Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874927247Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.874942747Z" level=info msg="containerd successfully booted in 0.049732s"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.886106788Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.886203489Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.886244290Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.886256390Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.887738308Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.887836310Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.887918411Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.888028712Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.893620283Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.978372251Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.978674254Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.978740855Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.978809056Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.979155260Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.979251562Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 13 00:12:58 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:12:58.979632366Z" level=info msg="Loading containers: start."
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.130867792Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.131818404Z" level=warning msg="475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.164912306Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.179347982Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/475648402f9dd8fac6f25b8149019409a483c4c9e7753deb719f24cdc4ec422e"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.186327467Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.221721797Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=c0294c8fc81ba3f730e7d33444441e9b15141f4239d3b3a9da7e748849d87b03 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/c0294c8fc81ba3f730e7d33444441e9b15141f4239d3b3a9da7e748849d87b03"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.222406705Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.223664421Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.246049393Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.281190520Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=e397386a3fa398e9610bb3d0df368b8e85b3e1c1dfac5383338500c6229f7413 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e397386a3fa398e9610bb3d0df368b8e85b3e1c1dfac5383338500c6229f7413"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.281780827Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.289456921Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.291210342Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=4da1269053d776f32b410defcaa0d65fc75414f83d65c765837792b23b2c7394 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/4da1269053d776f32b410defcaa0d65fc75414f83d65c765837792b23b2c7394"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.299122738Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.308123648Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=d2d875dadd6d0c131bc11d6830af935509433f263849f9f6cdf4cd2deea7e253 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d2d875dadd6d0c131bc11d6830af935509433f263849f9f6cdf4cd2deea7e253"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.311607290Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.353660401Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.378263001Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=b464eae6919682c950a83b0ca14dd5a9674789bd7fcdecb614ff551fb6d2489b path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/b464eae6919682c950a83b0ca14dd5a9674789bd7fcdecb614ff551fb6d2489b"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.380478628Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.381908745Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.382155448Z" level=warning msg="05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.393947691Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/05504ea77ff30a605975c2fc1edfc366103d7193d5e5962eeeb488969fbcd2a8"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.398749450Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.561571430Z" level=info msg="Removing stale sandbox 37313b4280ad3113b5834f6e5eee0707fcdfb670ce1a0f07de1cc767d9d5c7d7 (4da1269053d776f32b410defcaa0d65fc75414f83d65c765837792b23b2c7394)"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.565367176Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0e3815aba0194611a78c7fa44e6f2cf59fcb364c68e3176098cac7ff90c379ab 75669bde287108a76710bae22f576b8b2a7fa954ba60af5d6f492adb7eb8e3a1], retrying...."
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.724625413Z" level=info msg="Removing stale sandbox 3f873fec02dce8136c0c4e480a63c15e2fe4bd42185768221def90be49bc81b1 (c0294c8fc81ba3f730e7d33444441e9b15141f4239d3b3a9da7e748849d87b03)"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.728215456Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0e3815aba0194611a78c7fa44e6f2cf59fcb364c68e3176098cac7ff90c379ab 358a036945141598e3c81e3c1d76d8b1406c2e927b6607dbb8ae4e1a93a9e6b4], retrying...."
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.868171458Z" level=info msg="Removing stale sandbox 711e22165e874a0f29add00a2d96541e68db4a6fa5a2d9e86a45d7aa45952714 (e397386a3fa398e9610bb3d0df368b8e85b3e1c1dfac5383338500c6229f7413)"
	Dec 13 00:13:00 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:00.872146007Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0e3815aba0194611a78c7fa44e6f2cf59fcb364c68e3176098cac7ff90c379ab 801ab2a32890a4c4cf0899d58194050829b78c9bc80f86f87a6bab1d227be518], retrying...."
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.002161088Z" level=info msg="Removing stale sandbox 9a26f5217d32256e20987a0d4c3bfc2f2e2691e8943875b7446cfb046ef82340 (d2d875dadd6d0c131bc11d6830af935509433f263849f9f6cdf4cd2deea7e253)"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.005169124Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0e3815aba0194611a78c7fa44e6f2cf59fcb364c68e3176098cac7ff90c379ab e7e892aa33123b5faad5af293539772927e201052101974f938dfe84ef096e50], retrying...."
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.137686307Z" level=info msg="Removing stale sandbox f13e5f3ec3bece2a2c7e53b69dcd227134d5af5cac3889bd0623c5c58ee74847 (b464eae6919682c950a83b0ca14dd5a9674789bd7fcdecb614ff551fb6d2489b)"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.146016207Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 1c9e0b655f0cce11bc0f65831a85dfcc4b2558bafd8e4d83063ad81ef9daef44 77b7bc2ca901e5e13ca0a42eda6d2d1d79f4830b6160fa479febe08ea6818bc8], retrying...."
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.164928933Z" level=info msg="There are old running containers, the network config will not take affect"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.236960494Z" level=info msg="Loading containers: done."
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.268660473Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.268841175Z" level=info msg="Daemon has completed initialization"
	Dec 13 00:13:01 running-upgrade-100500 systemd[1]: Started Docker Application Container Engine.
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.293570670Z" level=info msg="API listen on [::]:2376"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.293742772Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.828198260Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.828563164Z" level=warning msg="ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.838545684Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/ea76ed4793e3861dd6bde359127abb299b2c94d3a8252001e0df99fc25fab059"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.844631256Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.982568405Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1bc76be3e11ab77c94171c4902a1c76e21b536f1cc02d6a79ea0b75d24a8366b/shim.sock" debug=false pid=8915
	Dec 13 00:13:01 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:01.990717002Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fde4c6348e62e8dcd785ce0618a6adf9f04a2e06623bbccb81b01ed50f341a6c/shim.sock" debug=false pid=8925
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.010332635Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0e3ebea04d6f1288eed74b6bbea15273af2c0cd50dc5680f1e697acd1f11c693/shim.sock" debug=false pid=8938
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.017989425Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a88bdcaf0e6a7391fbfdbec71ad747e3dd9d3371c98d36e64fa81d807f20ceb6/shim.sock" debug=false pid=8946
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.018984136Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/831037c0998655368e41fb78123263347b64e66ed87f276850ae608b5ad1c085/shim.sock" debug=false pid=8951
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.043718327Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/69a6359d7f7c11d112023e2ddd421590e4859ff08f726650de480f0c655f49b0/shim.sock" debug=false pid=8960
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.054548654Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c8aa30bc9ddba000d0e4c494b1b5ace9ee931f6724f2fef2465e1f3034f25bf6/shim.sock" debug=false pid=8968
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.085110713Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8e3fe1ef03a34998ea3bc97a30e454343b3dced7c44280b5363a13e0969feb5a/shim.sock" debug=false pid=9005
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.754103473Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d/shim.sock" debug=false pid=9251
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.777656950Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5ad95879a099e789f2a2a4f092fa10d44ce3774be11f2f67c340a7800761d9e2/shim.sock" debug=false pid=9253
	Dec 13 00:13:02 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:02.884078400Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f70e8f796987f94d9b7e926e77330e6e40c0d804017e845f99b0a0a795e3cb68/shim.sock" debug=false pid=9289
	Dec 13 00:13:03 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:03.055690205Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4c6b31bcbf49738be0b19d63b5d1106e6c0af7b5bcb8ea0f4c6287f125756ccb/shim.sock" debug=false pid=9331
	Dec 13 00:13:03 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:03.157726484Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2cabf2833e1dc5546a6a4354ce0741d44ba28dad026a2d9830508092c48ccb4a/shim.sock" debug=false pid=9369
	Dec 13 00:13:03 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:03.543647242Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/be90f2048f991b102830c4841a5e10812ff04c490ae2de8e93369c751dec6362/shim.sock" debug=false pid=9447
	Dec 13 00:13:04 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:04.843191094Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:04 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:04.843699899Z" level=warning msg="298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:04 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:04.879561607Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/298e7a7dd098b25cbab3cef388f0e73f1606510f154823df8461238e9d3c6fb3"
	Dec 13 00:13:04 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:04.879899111Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:05 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:05.070885968Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/421f067083d895b7d3fefde359a367003826a4698e75c2c3ebe6d62a458002a1/shim.sock" debug=false pid=9621
	Dec 13 00:13:05 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:05.515689839Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/325252bedb25dec2423618ec683aee129f13b37a06ef753d19a4b7b61ad20ce9/shim.sock" debug=false pid=9698
	Dec 13 00:13:19 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:19.062896950Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/06544ec42538456e5932c9653f7812684bdcc33fc3caf1c847b101ff0b247ca9/shim.sock" debug=false pid=9963
	Dec 13 00:13:22 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:22.060422403Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dd4a5fa9714fa3f0955e4346288ef4d742f8607ec0cdc6b145eb31cee78e622a/shim.sock" debug=false pid=10082
	Dec 13 00:13:30 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:30.463366147Z" level=info msg="Processing signal 'terminated'"
	Dec 13 00:13:30 running-upgrade-100500 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.559590327Z" level=info msg="shim reaped" id=5ad95879a099e789f2a2a4f092fa10d44ce3774be11f2f67c340a7800761d9e2
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.564109852Z" level=info msg="shim reaped" id=c8aa30bc9ddba000d0e4c494b1b5ace9ee931f6724f2fef2465e1f3034f25bf6
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.569712483Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.569988784Z" level=warning msg="5ad95879a099e789f2a2a4f092fa10d44ce3774be11f2f67c340a7800761d9e2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5ad95879a099e789f2a2a4f092fa10d44ce3774be11f2f67c340a7800761d9e2/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.574470909Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.609913702Z" level=info msg="shim reaped" id=a88bdcaf0e6a7391fbfdbec71ad747e3dd9d3371c98d36e64fa81d807f20ceb6
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.629626309Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.636205945Z" level=info msg="shim reaped" id=fde4c6348e62e8dcd785ce0618a6adf9f04a2e06623bbccb81b01ed50f341a6c
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.661218582Z" level=info msg="shim reaped" id=1bc76be3e11ab77c94171c4902a1c76e21b536f1cc02d6a79ea0b75d24a8366b
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.663373094Z" level=info msg="shim reaped" id=f70e8f796987f94d9b7e926e77330e6e40c0d804017e845f99b0a0a795e3cb68
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.665632306Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.688676532Z" level=info msg="shim reaped" id=69a6359d7f7c11d112023e2ddd421590e4859ff08f726650de480f0c655f49b0
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.698853387Z" level=info msg="shim reaped" id=4c6b31bcbf49738be0b19d63b5d1106e6c0af7b5bcb8ea0f4c6287f125756ccb
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.701273500Z" level=info msg="shim reaped" id=831037c0998655368e41fb78123263347b64e66ed87f276850ae608b5ad1c085
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.710749652Z" level=info msg="shim reaped" id=2cabf2833e1dc5546a6a4354ce0741d44ba28dad026a2d9830508092c48ccb4a
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.712770863Z" level=info msg="shim reaped" id=421f067083d895b7d3fefde359a367003826a4698e75c2c3ebe6d62a458002a1
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.718595795Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.718832396Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.718991597Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.719413499Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.721878213Z" level=warning msg="4c6b31bcbf49738be0b19d63b5d1106e6c0af7b5bcb8ea0f4c6287f125756ccb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4c6b31bcbf49738be0b19d63b5d1106e6c0af7b5bcb8ea0f4c6287f125756ccb/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.728854651Z" level=info msg="shim reaped" id=0e3ebea04d6f1288eed74b6bbea15273af2c0cd50dc5680f1e697acd1f11c693
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.733089474Z" level=info msg="shim reaped" id=dd4a5fa9714fa3f0955e4346288ef4d742f8607ec0cdc6b145eb31cee78e622a
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.738427703Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.739133407Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.739630410Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.741420819Z" level=warning msg="2cabf2833e1dc5546a6a4354ce0741d44ba28dad026a2d9830508092c48ccb4a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2cabf2833e1dc5546a6a4354ce0741d44ba28dad026a2d9830508092c48ccb4a/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.742479225Z" level=warning msg="f70e8f796987f94d9b7e926e77330e6e40c0d804017e845f99b0a0a795e3cb68 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f70e8f796987f94d9b7e926e77330e6e40c0d804017e845f99b0a0a795e3cb68/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.746546547Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.746573448Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.746848749Z" level=warning msg="dd4a5fa9714fa3f0955e4346288ef4d742f8607ec0cdc6b145eb31cee78e622a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/dd4a5fa9714fa3f0955e4346288ef4d742f8607ec0cdc6b145eb31cee78e622a/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.858812360Z" level=info msg="shim reaped" id=06544ec42538456e5932c9653f7812684bdcc33fc3caf1c847b101ff0b247ca9
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.860839271Z" level=info msg="shim reaped" id=8e3fe1ef03a34998ea3bc97a30e454343b3dced7c44280b5363a13e0969feb5a
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.890114031Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.890304732Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:31 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:31.890521133Z" level=warning msg="06544ec42538456e5932c9653f7812684bdcc33fc3caf1c847b101ff0b247ca9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/06544ec42538456e5932c9653f7812684bdcc33fc3caf1c847b101ff0b247ca9/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:32 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:32.270292405Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/518273dc69edb2f9b539a6d419ed705b8ab59871e83592fcc8679c97578d4676/shim.sock" debug=false pid=10986
	Dec 13 00:13:33 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:33.217831974Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9a5cdd1da5d3dc40c243f6c9e3b2d134575b94945c9f69afaf4d5f7e0f977499/shim.sock" debug=false pid=11031
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.423985710Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9cab93a2e51415cf8a51b421559883d158637d7baa551601c4f5f3b6dd1ec13b/shim.sock" debug=false pid=11089
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.828035615Z" level=info msg="shim reaped" id=be90f2048f991b102830c4841a5e10812ff04c490ae2de8e93369c751dec6362
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.838141270Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.838604572Z" level=warning msg="be90f2048f991b102830c4841a5e10812ff04c490ae2de8e93369c751dec6362 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/be90f2048f991b102830c4841a5e10812ff04c490ae2de8e93369c751dec6362/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.889173748Z" level=info msg="shim reaped" id=325252bedb25dec2423618ec683aee129f13b37a06ef753d19a4b7b61ad20ce9
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.901908418Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:35 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:35.902091219Z" level=warning msg="325252bedb25dec2423618ec683aee129f13b37a06ef753d19a4b7b61ad20ce9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/325252bedb25dec2423618ec683aee129f13b37a06ef753d19a4b7b61ad20ce9/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:37 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:37.574770844Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/481c45c3985e38ff8a6c897547fa632a2f8029d5cb3944b647f0df5e2e1308bd/shim.sock" debug=false pid=11236
	Dec 13 00:13:39 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:39.357450570Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1d76552925044d5ddad623906b67938aea8175770e48dba45d1f2dd70419cdbf/shim.sock" debug=false pid=11303
	Dec 13 00:13:40 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:40.788138675Z" level=info msg="Container cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d failed to exit within 10 seconds of signal 15 - using the force"
	Dec 13 00:13:40 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:40.944702829Z" level=info msg="shim reaped" id=cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d
	Dec 13 00:13:40 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:40.953219776Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 00:13:40 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:40.953466077Z" level=warning msg="cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/cba8c6f64066ca1d9c9090729d27c995f003b14cb387f492cb0b31fe310a209d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.016473721Z" level=info msg="Daemon shutdown complete"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.016610622Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.016710022Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.016717222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.061800568Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.072622727Z" level=warning msg="4f56a9b3c2f105710fa2b1c0ad2c1b042ec6823df1604bb69b70c5f706cda3b4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4f56a9b3c2f105710fa2b1c0ad2c1b042ec6823df1604bb69b70c5f706cda3b4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.078738861Z" level=error msg="4f56a9b3c2f105710fa2b1c0ad2c1b042ec6823df1604bb69b70c5f706cda3b4 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.078873961Z" level=error msg="Handler for POST /containers/4f56a9b3c2f105710fa2b1c0ad2c1b042ec6823df1604bb69b70c5f706cda3b4/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.589558047Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.595532380Z" level=warning msg="92dd0ea88f313a5a491bd5d2bab785f94f16f851e5590328ab341f03f35065e6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/92dd0ea88f313a5a491bd5d2bab785f94f16f851e5590328ab341f03f35065e6/mounts/shm, flags: 0x2: no such file or directory"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.602110016Z" level=error msg="92dd0ea88f313a5a491bd5d2bab785f94f16f851e5590328ab341f03f35065e6 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 13 00:13:41 running-upgrade-100500 dockerd[8237]: time="2023-12-13T00:13:41.602220617Z" level=error msg="Handler for POST /containers/92dd0ea88f313a5a491bd5d2bab785f94f16f851e5590328ab341f03f35065e6/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Succeeded.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 10986 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 11031 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 11089 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 11236 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Found left-over process 11303 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.089439775Z" level=info msg="Starting up"
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.092672592Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.092809293Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.092841793Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.092868293Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: time="2023-12-13T00:13:42.093303996Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 13 00:13:42 running-upgrade-100500 dockerd[11396]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 00:13:42 running-upgrade-100500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1213 00:13:42.179220    3552 out.go:239] * 
	W1213 00:13:42.181325    3552 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 00:13:42.182200    3552 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-100500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-13 00:13:42.6468073 +0000 UTC m=+7780.670467901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-100500 -n running-upgrade-100500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-100500 -n running-upgrade-100500: exit status 6 (12.5724495s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1213 00:13:42.776232    4372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1213 00:13:55.154562    4372 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-100500" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-100500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-100500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-100500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-100500: (53.9349143s)
--- FAIL: TestRunningBinaryUpgrade (442.12s)
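
Note on the failure above: the pre-upgrade dockerd (pid 8237) spawned its own containerd and reached it at /var/run/docker/containerd/containerd.sock, while the post-upgrade dockerd (pid 11396) instead dialed the system socket /run/containerd/containerd.sock and got "connection refused", so docker.service never came back up. Below is a minimal diagnostic sketch (hypothetical; not minikube or test-suite code) that distinguishes "socket file missing" from "socket present but nothing listening" for that path. It assumes it is compiled and run inside the guest VM, e.g. via `minikube ssh`.

// probe_containerd.go - diagnostic sketch, not part of minikube.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the dockerd error in the log above.
	const sock = "/run/containerd/containerd.sock"

	// Stat first: a missing file means containerd never created its socket.
	if _, err := os.Stat(sock); err != nil {
		fmt.Printf("socket not present: %v\n", err)
		return
	}

	// The file exists; try to connect. A "connection refused" here matches
	// the dockerd log and means containerd is not accepting connections.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket present but dial failed: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}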

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (300s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-665000 --driver=hyperv
E1213 00:00:53.185161   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-665000 --driver=hyperv: exit status 1 (4m59.7105474s)

                                                
                                                
-- stdout --
	* [NoKubernetes-665000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-665000 in cluster NoKubernetes-665000

-- /stdout --
** stderr ** 
	W1212 23:58:10.105283   10480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-665000 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-665000 -n NoKubernetes-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-665000 -n NoKubernetes-665000: exit status 7 (293.9595ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W1213 00:03:09.804866    4644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-665000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (300.00s)

TestNetworkPlugins/group/false/NetCatPod (10800.616s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-375000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xjqcp" [9675ddb8-2251-4d83-982d-477e85454029] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 01:03:47.305764   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-159300\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-xjqcp" [9675ddb8-2251-4d83-982d-477e85454029] Running
E1213 01:04:01.534686   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-375000\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (46m10s)
	TestNetworkPlugins/group (46m10s)
	TestNetworkPlugins/group/calico (13m55s)
	TestNetworkPlugins/group/custom-flannel (7m5s)
	TestNetworkPlugins/group/enable-default-cni (3m21s)
	TestNetworkPlugins/group/enable-default-cni/Start (3m21s)
	TestNetworkPlugins/group/false (6m8s)
	TestNetworkPlugins/group/false/NetCatPod (16s)
	TestStartStop (51m55s)
	TestStartStop/group (51m55s)

goroutine 3250 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

goroutine 1 [chan receive, 39 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc00046f6c0, 0xc000887b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc00044db80?, {0x4eeefc0, 0x2a, 0x2a}, {0xc000887be8?, 0xcfbfa5?, 0x4f10be0?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc00044db80)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00009bef0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000154200)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2035 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0008b4b40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1571 +0x53c
testing.tRunner(0xc00046fa00, 0x379d6f0)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1907
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2731 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00255f9e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2727
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 28 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 27
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

goroutine 873 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000912610, 0x36)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020b43c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000912640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x3bef740, 0xc002379b00}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xd89140?, 0x3b9aca00, 0x0, 0x0?, 0xc00218ff80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xd8a045?, 0xc000918000?, 0xc000a14140?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 827
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

goroutine 1997 [chan receive, 6 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc00215d860, 0xc0026de048)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1758
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 151 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 150
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

goroutine 875 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 874
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

goroutine 150 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc000c75f50, 0xc0000ae3b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x1?, 0x1?, 0xc000c75fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000c75fd0?, 0xdcdf87?, 0xc002378240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 179
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

goroutine 2897 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00284b410, 0x1)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020b55c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00284b440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00208bf88?, {0x3bef740, 0xc002378030}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a220c0?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xdcdf25?, 0xc0028ea000?, 0xc000a221e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

goroutine 149 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000848c90, 0x3c)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00089ff80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000848cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x6100000070?, {0x3bef740, 0xc002379800}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000054420?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xdcdf25?, 0xc0008e1340?, 0xc000054660?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 179
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

goroutine 3195 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc00205ff50, 0xc002aa7d98?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x1?, 0x1?, 0xc00205ffb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00205ffd0?, 0xdcdf87?, 0xc002c40180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3180
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

goroutine 178 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009f4060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 170
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2961 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00284b440, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2977
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 2000 [chan receive, 46 minutes]:
testing.(*testContext).waitParallel(0xc0008b4b40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000919ba0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000919ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000919ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000919ba0, 0xc002592280)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1997
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2822 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a15040, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2820
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 2707 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00284af50, 0x11)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00255f260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00284af80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00207df88?, {0x3bef740, 0xc0023041e0}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc00207dfd0?, 0xdcdf87?, 0xc0007c09c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2732
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

goroutine 179 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000848cc0, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 170
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 2821 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002aa7200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2820
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 826 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020b44e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 858
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 874 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc00208df50, 0xc00208df30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x45?, 0xc00208dfb0?, 0xc00208df78?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0xc00063fb30?, 0xc000054fc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0xc0007c06c0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 827
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

goroutine 734 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x17ef8a8c028, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0x0?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0007b3918, 0xc00203bbb8)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0007b3900, 0x3c4, {0xc0020e2000?, 0x2000?, 0x0?}, 0x203bcc8?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0007b3900, 0xc00203bd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0007b3900)
	/usr/local/go/src/net/fd_windows.go:166 +0x54
net.(*TCPListener).accept(0xc002624300)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc002624300)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc0005e21e0, {0x3c05840, 0xc002624300})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc0005e21e0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0005a5380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 731
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a

goroutine 1444 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc00221df50, 0xc000a7e8f8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x1?, 0x1?, 0xc00221dfb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00221dfd0?, 0xdcdf87?, 0xc0007448a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1474
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

goroutine 3196 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3195
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

goroutine 2777 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc000c7df50, 0x26a86cc?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x0?, 0xd89180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdcdf25?, 0xc002dbf600?, 0xc000054fc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2822
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

goroutine 2993 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00284b910, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0000aec00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00284b940)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002347f88?, {0x3bef740, 0xc0009a7fb0}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002347fd0?, 0xdcdf87?, 0xc0024a4000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3122
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

goroutine 3105 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0000aed80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3101
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 827 [chan receive, 153 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000912640, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 858
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 2778 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2777
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

goroutine 1064 [chan send, 149 minutes]:
os/exec.(*Cmd).watchCtx(0xc0028eb1e0, 0xc002a925a0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 705
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 3234 [IO wait]:
internal/poll.runtime_pollWait(0x17ef8a8c120, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xe1257c99aebc43be?, 0x59e918f0ac44a1bd?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0024d1918, 0x379dfb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).Read(0xc0024d1900, {0xc00247e000, 0x2000, 0x2000})
	/usr/local/go/src/internal/poll/fd_windows.go:436 +0x2b1
net.(*netFD).Read(0xc0024d1900, {0xc00247e000?, 0xf36?, 0xc000998880?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000122cd0, {0xc00247e000?, 0xc00247f0c5?, 0x5?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc002d7cab0, {0xc00247e000?, 0xc002d7cab0?, 0x0?})
	/usr/local/go/src/crypto/tls/conn.go:805 +0x3b
bytes.(*Buffer).ReadFrom(0xc000001ea8, {0x3befec0, 0xc002d7cab0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000001c00, {0x17ef8f28ef0?, 0xc000111da0}, 0xf3b?)
	/usr/local/go/src/crypto/tls/conn.go:827 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000001c00, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:625 +0x250
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:587
crypto/tls.(*Conn).Read(0xc000001c00, {0xc00069c000, 0x1000, 0x11c4665?})
	/usr/local/go/src/crypto/tls/conn.go:1369 +0x158
bufio.(*Reader).Read(0xc0009f5320, {0xc00203eac0, 0x9, 0x4eb2210?})
	/usr/local/go/src/bufio/bufio.go:244 +0x197
io.ReadAtLeast({0x3bee4e0, 0xc0009f5320}, {0xc00203eac0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00203eac0, 0x9, 0xc000600400?}, {0x3bee4e0?, 0xc0009f5320?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00203ea80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00208bf98)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:2275 +0x11f
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000964f00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:2170 +0x65
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3185
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:821 +0xcbe

goroutine 1441 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002aa68a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1440
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2349 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00284ab40, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2386
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 1083 [chan send, 151 minutes]:
os/exec.(*Cmd).watchCtx(0xc0028ebb80, 0xc002a93bc0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1082
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 3201 [syscall, locked to thread]:
syscall.SyscallN(0x4f40dc0?, {0xc002195c28?, 0x0?, 0x3f7ac78?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002195c80?, 0x100000000c5e656?, 0xc000919520?, 0xc002195ce8?, 0xc51265?, 0xc00006c500?, 0xc000919520?, 0xc002195ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00238b200?, 0x200, 0x200?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00043ef00?, {0xc00238b200?, 0x0?, 0xc00238b200?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00043ef00, {0xc00238b200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009e558, {0xc00238b200?, 0xc002195e68?, 0xc002195e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002912330, {0x3bee3c0, 0xc00009e558})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3bee440, 0xc002912330}, {0x3bee3c0, 0xc00009e558}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000964d80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2069
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 3180 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00239ad00, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3121
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 3138 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc00278df50, 0xc00016bcf0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x0?, 0xc002626480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdcdf25?, 0xc002698dc0?, 0xc0028624e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3122
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

goroutine 1443 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000a14610, 0x32)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002aa6780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a14640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4eb5450?, {0x3bef740, 0xc00280a030}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0024ca6c0?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xdcdf25?, 0xc0008e1600?, 0xc0024ca7e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1474
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

goroutine 3122 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00284b940, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3101
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 3096 [syscall, locked to thread]:
syscall.SyscallN(0x4f425c0?, {0xc00206fc28?, 0xf532cb?, 0x3f6ddd0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4ec05a0?, 0xc5e656?, 0x4f6bd80?, 0xc00206fce8?, 0xc513bd?, 0x17ed33f0598?, 0x2e87?, 0xc00276aea0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00231bd86?, 0x27a, 0xcf7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc0005dd680?, {0xc00231bd86?, 0x6d3?, 0xc002314000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc0005dd680, {0xc00231bd86, 0x27a, 0x27a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0008a42b8, {0xc00231bd86?, 0x2b43e40?, 0xc00206fe68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00276aea0, {0x3bee3c0, 0xc0008a42b8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3bee440, 0xc00276aea0}, {0x3bee3c0, 0xc0008a42b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0001f3380?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3094
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 3216 [syscall, locked to thread]:
syscall.SyscallN(0x4f419c0?, {0xc002791c28?, 0x0?, 0x3f7ac78?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002791c80?, 0x100000000c5e656?, 0xc002846820?, 0xc002791ce8?, 0xc51265?, 0xc000070f00?, 0xc002846820?, 0xc002791ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc000a6b93a?, 0x2c6, 0x400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000642500?, {0xc000a6b93a?, 0x0?, 0xc000a6b800?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000642500, {0xc000a6b93a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00074a218, {0xc000a6b93a?, 0xc002791e68?, 0xc002791e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002185230, {0x3bee3c0, 0xc00074a218})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3bee440, 0xc002185230}, {0x3bee3c0, 0xc00074a218}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000964d80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2070
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2732 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00284af80, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2727
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 3194 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00239acd0, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002c2eae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00239ad00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00293bf90?, {0x3bef740, 0xc002cee000}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc00293bfd0?, 0xdcdf87?, 0xc000055980?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3180
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

goroutine 3121 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3c11e18, 0xc00087f810}, {0x3c05e70?, 0xc002e3d7c0}, 0x1, 0x0, 0xc00005fb80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/loop.go:91 +0x2dc
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3c11e18?, 0xc0004b24d0?}, 0x3b9aca00, 0xc00206bbf0?, 0x0?, 0xc002719e18?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:48 +0x98
k8s.io/minikube/test/integration.PodWait({0x3c11e18, 0xc0004b24d0}, 0xc002380340, {0xc0006c26f0, 0xc}, {0x2d4dab2, 0x7}, {0x2d54552, 0xa}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc002380340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:163 +0x3c5
testing.tRunner(0xc002380340, 0xc0028fe600)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2068
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1474 [chan receive, 139 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a14640, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1440
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 1445 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1444
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

goroutine 2348 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002aa6f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2386
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 1758 [chan receive, 46 minutes]:
testing.(*T).Run(0xc002847520, {0x2d49d44?, 0xcb806d?}, 0xc0026de048)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002847520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002847520, 0x379d4d0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2708 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc0022f9f50, 0xc0021a76d8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdcdf25?, 0xc000a4d1e0?, 0xc0007c0780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2732
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

goroutine 1999 [chan receive, 46 minutes]:
testing.(*testContext).waitParallel(0xc0008b4b40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000505d40)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000505d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000505d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000505d40, 0xc002592200)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1997
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3095 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x4f3f140?, {0xc002391c28?, 0x2391d60?, 0x3f7dbc8?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d?, 0xc5e656?, 0xc00097b520?, 0xc002391ce8?, 0xc51265?, 0xc885dc?, 0xc00097b520?, 0xc002391ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc002482a43?, 0x5bd, 0xcf7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc0005dd180?, {0xc002482a43?, 0x0?, 0xc002482800?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc0005dd180, {0xc002482a43, 0x5bd, 0x5bd})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0008a42a0, {0xc002482a43?, 0xc002391e68?, 0xc002391e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00276ae70, {0x3bee3c0, 0xc0008a42a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3bee440, 0xc00276ae70}, {0x3bee3c0, 0xc0008a42a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002592580?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3094
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2066 [chan receive, 46 minutes]:
testing.(*testContext).waitParallel(0xc0008b4b40)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000104820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000104820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000104820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000104820, 0xc002592380)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1997
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2212 [chan receive, 37 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a14a40, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2197
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

goroutine 2070 [syscall, locked to thread]:
syscall.SyscallN(0x7ffafc4d4de0?, {0xc00259b0e8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x5?, 0x30?, 0x3be3018?, 0xc000852cd0?, 0x100c00259b1e8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00074a210?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc0025ba1e0)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0024269a0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0x2279?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc0024269a0)
	/usr/local/go/src/os/exec/exec.go:1005 +0x94
k8s.io/minikube/test/integration.debugLogs(0xc000481d40, {0xc0006c2880, 0xd})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:638 +0xb3dc
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000481d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xc2c
testing.tRunner(0xc000481d40, 0xc002592580)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1997
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3097 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0028eadc0, 0xc0027fcf00)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3094
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 3139 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3138
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

goroutine 2979 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2978
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

goroutine 2371 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc00284ab10, 0x15)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002aa6de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00284ab40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xcb806d?, {0x3bef740, 0xc002304870}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000744960?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xdcdf25?, 0xc0008e11e0?, 0xc000744a80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2349
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

goroutine 1907 [chan receive, 53 minutes]:
testing.(*T).Run(0xc00215d6c0, {0x2d49d44?, 0x47ede5079df4?}, 0x379d6f0)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop(0xc00215d520?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00215d6c0, 0x379d518)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2069 [syscall, locked to thread]:
syscall.SyscallN(0x7ffafc4d4de0?, {0xc00271d0e8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x5?, 0x17ed370aa88?, 0x3be3018?, 0xc00271d168?, 0x100c00271d1e8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00009e518?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc002a40ae0)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0023c8420)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0x1c3?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc0023c8420)
	/usr/local/go/src/os/exec/exec.go:1005 +0x94
k8s.io/minikube/test/integration.debugLogs(0xc000481520, {0xc00272c030, 0x15})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:451 +0x5175
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000481520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xc2c
testing.tRunner(0xc000481520, 0xc002592500)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1997
	/usr/local/go/src/testing/testing.go:1648 +0x3ad
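
Goroutine 2069 is debugLogs shelling out: exec.(*Cmd).CombinedOutput calls Run, Run calls Wait, and on Windows (*os.Process).wait parks the goroutine in syscall.WaitForSingleObject until the child process exits, producing the [syscall, locked to thread] state above. A minimal sketch of that call chain with a hypothetical command in place of the minikube/kubectl invocations:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// On Windows this goroutine blocks in WaitForSingleObject until the
	// child exits, exactly as in the trace above.
	out, err := exec.Command("cmd", "/c", "echo", "hello").CombinedOutput()
	if err != nil {
		fmt.Println("command failed:", err)
		return
	}
	fmt.Printf("%s", out)
}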

                                                
                                                
goroutine 2211 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002c2f140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2197
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f
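
Goroutine 2211 and its siblings below (2273, 3179, 2960) belong to delaying workqueues: newDelayingQueue starts a background waitingLoop goroutine that selects over a timer and the incoming-items channel, moving each item onto the main queue once its delay expires, so an idle [select] is its normal state. A minimal sketch with a hypothetical item and delay:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.NewDelayingQueue() // spawns the waitingLoop goroutine
	defer q.ShutDown()

	q.AddAfter("retry-item", 100*time.Millisecond) // held by waitingLoop

	item, _ := q.Get() // blocks until the delay elapses
	fmt.Println("got", item)
	q.Done(item)
}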

                                                
                                                
goroutine 2373 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2372
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2068 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0004811e0, {0x2d52616?, 0x3be8c48?}, 0xc0028fe600)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0004811e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x8c5
testing.tRunner(0xc0004811e0, 0xc002592480)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1997
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 3094 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffafc4d4de0?, {0xc002855ba8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc002855cc0?, 0xc002855bb0?, 0xc002855ce0?, 0x100c002855ca8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc0008a4298?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc002972f60)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0028eadc0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0020981a0?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0020981a0, 0xc0028eadc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0020981a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0020981a0, 0xc00276acf0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2001
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2001 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00046f860, {0x2d49d49?, 0x3be8c48?}, 0xc00276acf0)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00046f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5f0
testing.tRunner(0xc00046f860, 0xc002592300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1997
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2978 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc0027cff50, 0xc0020b55b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x1?, 0x1?, 0xc0027cffb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0027cffd0?, 0xdcdf87?, 0xc002472bb0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2203 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2202
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2202 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc002207f50, 0xc000625840?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x0?, 0xc000067360?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdcdf25?, 0xc0028ea6e0?, 0xc002862de0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2212
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2273 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0000aeba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2301
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2709 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2708
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 2201 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a14a10, 0x17)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002c2f020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a14a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002203f88?, {0x3bef740, 0xc002913680}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002203fd0?, 0xdcdf87?, 0x74617473205d3331?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2212
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2776 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a15010, 0x10)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002aa70e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a15040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002205f90?, {0x3bef740, 0xc0028b2000}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002c418c0?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xdcdf25?, 0xc0028eadc0?, 0xc002c419e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2822
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2372 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc002201f50, 0xc002c2e598?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdcdf25?, 0xc0022fe2c0?, 0xc000744d20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2349
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2313 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000c5bdd0, 0x16)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3beb320?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0000aea80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c5be00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00231ff88?, {0x3bef740, 0xc002184390}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xc8821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc00231ffd0?, 0xdcdf87?, 0xc000c64900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2322
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2322 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c5be00, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2301
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594
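
Goroutines 2313 (above), 2322 (here), and 2314/2315 (below) are the three halves of one dynamicClientCert.Run call: Run launches a worker via wait.Until (cert_rotation.go:140) and a connection poller via wait.PollImmediateUntil (:142), then blocks on its stop channel (:147) until the transport cache is torn down, hence the 33-minute [chan receive]. A simplified sketch of that layout, with placeholder worker and poll bodies that are not the real implementation:

package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func run(stopCh <-chan struct{}) {
	worker := func() { /* drain one workqueue item, as runWorker does */ }
	go wait.Until(worker, time.Second, stopCh) // restarts the worker each second

	connectionPoll := func() (bool, error) { return false, nil } // never "done"
	go wait.PollImmediateUntil(time.Minute, connectionPoll, stopCh)

	<-stopCh // Run itself parks here until shutdown
}

func main() {
	stop := make(chan struct{})
	go run(stop)
	time.Sleep(20 * time.Millisecond)
	close(stop) // unblocks all three goroutines
	time.Sleep(20 * time.Millisecond)
}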

                                                
                                                
goroutine 2314 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c11fd8, 0xc0000541e0}, 0xc00234bf50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c11fd8, 0xc0000541e0}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c11fd8?, 0xc0000541e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdcdf25?, 0xc0028eab00?, 0xc0028633e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2322
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3179 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002c2ec00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3121
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2960 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020b56e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2977
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2315 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2314
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

                                                
                                    

Test pass (155/206)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 19.6
4 TestDownloadOnly/v1.16.0/preload-exists 0.08
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.28.4/json-events 12.39
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.27
17 TestDownloadOnly/v1.29.0-rc.2/json-events 15.81
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.27
23 TestDownloadOnly/DeleteAll 1.29
24 TestDownloadOnly/DeleteAlwaysSucceeds 1.31
26 TestBinaryMirror 7.03
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.29
32 TestAddons/Setup 380.71
35 TestAddons/parallel/Ingress 67.73
36 TestAddons/parallel/InspektorGadget 26.76
37 TestAddons/parallel/MetricsServer 21.41
38 TestAddons/parallel/HelmTiller 34.5
40 TestAddons/parallel/CSI 105.14
41 TestAddons/parallel/Headlamp 35.1
42 TestAddons/parallel/CloudSpanner 21.9
43 TestAddons/parallel/LocalPath 87.44
44 TestAddons/parallel/NvidiaDevicePlugin 20.46
47 TestAddons/serial/GCPAuth/Namespaces 0.33
48 TestAddons/StoppedEnableDisable 46.71
49 TestCertOptions 488.38
51 TestDockerFlags 377.61
52 TestForceSystemdFlag 245.15
53 TestForceSystemdEnv 481.47
60 TestErrorSpam/start 17.24
61 TestErrorSpam/status 36.32
62 TestErrorSpam/pause 22.65
63 TestErrorSpam/unpause 22.68
64 TestErrorSpam/stop 46.48
67 TestFunctional/serial/CopySyncFile 0.04
68 TestFunctional/serial/StartWithProxy 199.07
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 112.32
71 TestFunctional/serial/KubeContext 0.14
72 TestFunctional/serial/KubectlGetPods 0.23
75 TestFunctional/serial/CacheCmd/cache/add_remote 27.29
76 TestFunctional/serial/CacheCmd/cache/add_local 10.12
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.28
78 TestFunctional/serial/CacheCmd/cache/list 0.3
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.42
80 TestFunctional/serial/CacheCmd/cache/cache_reload 36.28
81 TestFunctional/serial/CacheCmd/cache/delete 0.56
82 TestFunctional/serial/MinikubeKubectlCmd 0.48
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.43
84 TestFunctional/serial/ExtraConfig 122.91
85 TestFunctional/serial/ComponentHealth 0.17
86 TestFunctional/serial/LogsCmd 8.29
87 TestFunctional/serial/LogsFileCmd 10.42
88 TestFunctional/serial/InvalidService 20.74
94 TestFunctional/parallel/StatusCmd 42.88
98 TestFunctional/parallel/ServiceCmdConnect 27.29
99 TestFunctional/parallel/AddonsCmd 0.89
100 TestFunctional/parallel/PersistentVolumeClaim 44.16
102 TestFunctional/parallel/SSHCmd 21.06
103 TestFunctional/parallel/CpCmd 58.69
104 TestFunctional/parallel/MySQL 66.07
105 TestFunctional/parallel/FileSync 10.59
106 TestFunctional/parallel/CertSync 62.99
110 TestFunctional/parallel/NodeLabels 0.19
112 TestFunctional/parallel/NonActiveRuntimeDisabled 10.6
114 TestFunctional/parallel/License 3.25
115 TestFunctional/parallel/ServiceCmd/DeployApp 18.4
116 TestFunctional/parallel/ProfileCmd/profile_not_create 8.76
117 TestFunctional/parallel/ProfileCmd/profile_list 8.28
118 TestFunctional/parallel/ServiceCmd/List 13.48
119 TestFunctional/parallel/ProfileCmd/profile_json_output 8.76
120 TestFunctional/parallel/ServiceCmd/JSONOutput 13.28
123 TestFunctional/parallel/Version/short 0.25
124 TestFunctional/parallel/Version/components 8.74
126 TestFunctional/parallel/ImageCommands/ImageListShort 7.79
127 TestFunctional/parallel/ImageCommands/ImageListTable 7.62
128 TestFunctional/parallel/ImageCommands/ImageListJson 7.77
129 TestFunctional/parallel/ImageCommands/ImageListYaml 7.58
130 TestFunctional/parallel/ImageCommands/ImageBuild 29.34
131 TestFunctional/parallel/ImageCommands/Setup 4.09
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.91
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 25.51
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.86
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 19.24
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 26.95
146 TestFunctional/parallel/DockerEnv/powershell 44.82
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.89
148 TestFunctional/parallel/ImageCommands/ImageRemove 16.23
149 TestFunctional/parallel/UpdateContextCmd/no_changes 2.54
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.48
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.55
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 18.05
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.3
154 TestFunctional/delete_addon-resizer_images 0.44
155 TestFunctional/delete_my-image_image 0.2
156 TestFunctional/delete_minikube_cached_images 0.19
160 TestImageBuild/serial/Setup 188.11
161 TestImageBuild/serial/NormalBuild 9.07
162 TestImageBuild/serial/BuildWithBuildArg 8.62
163 TestImageBuild/serial/BuildWithDockerIgnore 7.55
164 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.54
167 TestIngressAddonLegacy/StartLegacyK8sCluster 209.56
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 38.88
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 14.35
171 TestIngressAddonLegacy/serial/ValidateIngressAddons 82.78
174 TestJSONOutput/start/Command 198.64
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 7.84
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 7.63
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 33.64
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 1.55
202 TestMainNoArgs 0.25
206 TestMountStart/serial/StartWithMountFirst 146.28
207 TestMountStart/serial/VerifyMountFirst 10
208 TestMountStart/serial/StartWithMountSecond 157.58
209 TestMountStart/serial/VerifyMountSecond 10.14
210 TestMountStart/serial/DeleteFirst 65.97
211 TestMountStart/serial/VerifyMountPostDelete 9.8
212 TestMountStart/serial/Stop 22.69
213 TestMountStart/serial/RestartStopped 112.96
214 TestMountStart/serial/VerifyMountPostStop 9.43
221 TestMultiNode/serial/MultiNodeLabels 0.17
222 TestMultiNode/serial/ProfileList 7.53
229 TestPreload 490.78
230 TestScheduledStopWindows 321.58
237 TestKubernetesUpgrade 937.54
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.35
242 TestStoppedBinaryUpgrade/Setup 0.82
243 TestStoppedBinaryUpgrade/Upgrade 491.33
244 TestStoppedBinaryUpgrade/MinikubeLogs 9.44
253 TestPause/serial/Start 265.72
254 TestPause/serial/SecondStartNoReconfiguration 384.06
268 TestPause/serial/Pause 9.49
271 TestPause/serial/VerifyStatus 12.44
272 TestPause/serial/Unpause 7.98
273 TestPause/serial/PauseAgain 8.02
274 TestPause/serial/DeletePaused 37.59
275 TestPause/serial/VerifyDeletedResources 8.61
TestDownloadOnly/v1.16.0/json-events (19.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-524600 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-524600 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (19.6032427s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (19.60s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.08s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-524600
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-524600: exit status 85 (282.1919ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |          |
	|         | -p download-only-524600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:04:02
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:04:02.232204   13800 out.go:296] Setting OutFile to fd 616 ...
	I1212 22:04:02.232939   13800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:04:02.233497   13800 out.go:309] Setting ErrFile to fd 620...
	I1212 22:04:02.233540   13800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 22:04:02.245771   13800 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1212 22:04:02.259051   13800 out.go:303] Setting JSON to true
	I1212 22:04:02.262575   13800 start.go:128] hostinfo: {"hostname":"minikube7","uptime":72239,"bootTime":1702346402,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 22:04:02.262575   13800 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 22:04:02.264589   13800 out.go:97] [download-only-524600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 22:04:02.265282   13800 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 22:04:02.265035   13800 notify.go:220] Checking for updates...
	W1212 22:04:02.265282   13800 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1212 22:04:02.266861   13800 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 22:04:02.267380   13800 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:04:02.268060   13800 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1212 22:04:02.269343   13800 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:04:02.270216   13800 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:04:07.802125   13800 out.go:97] Using the hyperv driver based on user configuration
	I1212 22:04:07.802183   13800 start.go:298] selected driver: hyperv
	I1212 22:04:07.802183   13800 start.go:902] validating driver "hyperv" against <nil>
	I1212 22:04:07.802183   13800 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:04:07.853845   13800 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I1212 22:04:07.854931   13800 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 22:04:07.855009   13800 cni.go:84] Creating CNI manager for ""
	I1212 22:04:07.855009   13800 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1212 22:04:07.855009   13800 start_flags.go:323] config:
	{Name:download-only-524600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-524600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:04:07.856493   13800 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:04:07.857357   13800 out.go:97] Downloading VM boot image ...
	I1212 22:04:07.857921   13800 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 22:04:12.305277   13800 out.go:97] Starting control plane node download-only-524600 in cluster download-only-524600
	I1212 22:04:12.305416   13800 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 22:04:12.350021   13800 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 22:04:12.350551   13800 cache.go:56] Caching tarball of preloaded images
	I1212 22:04:12.350977   13800 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 22:04:12.351771   13800 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 22:04:12.351771   13800 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 22:04:12.425089   13800 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 22:04:17.234178   13800 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 22:04:17.235130   13800 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-524600"

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 22:04:21.828788    6524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)
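
The Last Start log above shows the preload flow: minikube resolves the remote tarball, downloads it with the md5 digest carried in the ?checksum= query parameter, and verifies the file before trusting the cache entry. A minimal sketch of a download with streaming md5 verification, assuming a hypothetical URL, destination, and checksum (the real values appear in the download.go lines above):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// download fetches url into dst, hashing the bytes as they are written to
// disk and rejecting the file if the md5 does not match wantMD5.
func download(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Placeholder values, not the real preload URL or digest.
	err := download("https://example.com/preload.tar.lz4", "preload.tar.lz4",
		"00000000000000000000000000000000")
	fmt.Println(err)
}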

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (12.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-524600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-524600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (12.3861271s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.39s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-524600
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-524600: exit status 85 (264.4707ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |          |
	|         | -p download-only-524600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |          |
	|         | -p download-only-524600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:04:22
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:04:22.197357   10372 out.go:296] Setting OutFile to fd 596 ...
	I1212 22:04:22.198137   10372 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:04:22.198137   10372 out.go:309] Setting ErrFile to fd 640...
	I1212 22:04:22.198137   10372 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 22:04:22.212060   10372 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1212 22:04:22.220164   10372 out.go:303] Setting JSON to true
	I1212 22:04:22.222569   10372 start.go:128] hostinfo: {"hostname":"minikube7","uptime":72259,"bootTime":1702346402,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 22:04:22.222569   10372 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 22:04:22.223735   10372 out.go:97] [download-only-524600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 22:04:22.224797   10372 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 22:04:22.223735   10372 notify.go:220] Checking for updates...
	I1212 22:04:22.226165   10372 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 22:04:22.226886   10372 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:04:22.227497   10372 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1212 22:04:22.231067   10372 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:04:22.231401   10372 config.go:182] Loaded profile config "download-only-524600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1212 22:04:22.231401   10372 start.go:810] api.Load failed for download-only-524600: filestore "download-only-524600": Docker machine "download-only-524600" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:04:22.232464   10372 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 22:04:22.232523   10372 start.go:810] api.Load failed for download-only-524600: filestore "download-only-524600": Docker machine "download-only-524600" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:04:27.561828   10372 out.go:97] Using the hyperv driver based on existing profile
	I1212 22:04:27.561926   10372 start.go:298] selected driver: hyperv
	I1212 22:04:27.561926   10372 start.go:902] validating driver "hyperv" against &{Name:download-only-524600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-524600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:04:27.610636   10372 cni.go:84] Creating CNI manager for ""
	I1212 22:04:27.610636   10372 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 22:04:27.610636   10372 start_flags.go:323] config:
	{Name:download-only-524600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-524600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:04:27.610636   10372 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:04:27.612317   10372 out.go:97] Starting control plane node download-only-524600 in cluster download-only-524600
	I1212 22:04:27.612364   10372 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 22:04:27.651547   10372 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 22:04:27.651665   10372 cache.go:56] Caching tarball of preloaded images
	I1212 22:04:27.652152   10372 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 22:04:27.653290   10372 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 22:04:27.653377   10372 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1212 22:04:27.721614   10372 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-524600"

                                                
                                                
-- /stdout --
** stderr ** 
	W1212 22:04:34.496458    8540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.27s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (15.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-524600 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-524600 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (15.8130502s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (15.81s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-524600
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-524600: exit status 85 (267.0547ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |          |
	|         | -p download-only-524600           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |          |
	|         | -p download-only-524600           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-524600 | minikube7\jenkins | v1.32.0 | 12 Dec 23 22:04 UTC |          |
	|         | -p download-only-524600           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:04:34
	Running on machine: minikube7
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:04:34.838815     808 out.go:296] Setting OutFile to fd 636 ...
	I1212 22:04:34.839575     808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:04:34.839575     808 out.go:309] Setting ErrFile to fd 596...
	I1212 22:04:34.839575     808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1212 22:04:34.852222     808 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1212 22:04:34.860508     808 out.go:303] Setting JSON to true
	I1212 22:04:34.863186     808 start.go:128] hostinfo: {"hostname":"minikube7","uptime":72272,"bootTime":1702346402,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 22:04:34.863890     808 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 22:04:34.864867     808 out.go:97] [download-only-524600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 22:04:34.865893     808 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 22:04:34.865412     808 notify.go:220] Checking for updates...
	I1212 22:04:34.866656     808 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 22:04:34.867415     808 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:04:34.868193     808 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1212 22:04:34.869675     808 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:04:34.870995     808 config.go:182] Loaded profile config "download-only-524600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1212 22:04:34.871503     808 start.go:810] api.Load failed for download-only-524600: filestore "download-only-524600": Docker machine "download-only-524600" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:04:34.871684     808 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 22:04:34.871684     808 start.go:810] api.Load failed for download-only-524600: filestore "download-only-524600": Docker machine "download-only-524600" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:04:40.267115     808 out.go:97] Using the hyperv driver based on existing profile
	I1212 22:04:40.267115     808 start.go:298] selected driver: hyperv
	I1212 22:04:40.267678     808 start.go:902] validating driver "hyperv" against &{Name:download-only-524600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-524600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:04:40.320893     808 cni.go:84] Creating CNI manager for ""
	I1212 22:04:40.321453     808 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 22:04:40.321583     808 start_flags.go:323] config:
	{Name:download-only-524600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-524600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:04:40.321768     808 iso.go:125] acquiring lock: {Name:mk8c92d435e858e61c16fb6de8aa69ec99268a5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:04:40.322464     808 out.go:97] Starting control plane node download-only-524600 in cluster download-only-524600
	I1212 22:04:40.322464     808 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 22:04:40.367463     808 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 22:04:40.367586     808 cache.go:56] Caching tarball of preloaded images
	I1212 22:04:40.368119     808 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 22:04:40.369275     808 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 22:04:40.369464     808 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 22:04:40.436818     808 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 22:04:45.640112     808 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 22:04:45.641108     808 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 22:04:46.596200     808 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1212 22:04:46.596719     808 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-524600\config.json ...
	I1212 22:04:46.599102     808 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 22:04:46.600257     808 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\windows\amd64\v1.29.0-rc.2/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-524600"

-- /stdout --
** stderr ** 
	W1212 22:04:50.578084   10476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.27s)
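
Note: the preload.go lines above show the pattern minikube uses for the image tarball: download with an md5 sum attached to the URL as a checksum parameter, then verify the file on disk before caching it. Below is a minimal, self-contained sketch of that verify step in Go; the file name is a placeholder, the expected sum is the one from this run's log, and the helper is illustrative rather than minikube's actual code.

// verifychecksum.go - a sketch of verifying a downloaded tarball against an
// expected md5 sum, as the preload download/verify log lines above describe.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected hex sum.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Placeholder file name; the expected sum is the md5 from this run's URL.
	if err := verifyMD5("preloaded-images.tar.lz4", "d472e9d5f1548dd0d68eb75b714c5436"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum ok")
}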

TestDownloadOnly/DeleteAll (1.29s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2925622s)
--- PASS: TestDownloadOnly/DeleteAll (1.29s)

TestDownloadOnly/DeleteAlwaysSucceeds (1.31s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-524600
aaa_download_only_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-524600: (1.3073123s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.31s)

TestBinaryMirror (7.03s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-613500 --alsologtostderr --binary-mirror http://127.0.0.1:50993 --driver=hyperv
aaa_download_only_test.go:307: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-613500 --alsologtostderr --binary-mirror http://127.0.0.1:50993 --driver=hyperv: (6.1417885s)
helpers_test.go:175: Cleaning up "binary-mirror-613500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-613500
--- PASS: TestBinaryMirror (7.03s)
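
Note: TestBinaryMirror points minikube's kubectl/kubelet/kubeadm downloads at a throwaway HTTP endpoint on 127.0.0.1:50993. A minimal sketch of what such a mirror amounts to, assuming the release binaries have already been copied into a local ./mirror directory laid out like the upstream paths (the directory name and port are illustrative, not the test's actual server):

// mirror.go - a sketch of a local binary mirror: a static file server whose
// directory layout mimics the upstream release paths.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror at http://127.0.0.1:50993/, e.g.
	// ./mirror/v1.28.4/bin/windows/amd64/kubectl.exe
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:50993", nil))
}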

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-310200
addons_test.go:927: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-310200: exit status 85 (294.7987ms)

-- stdout --
	* Profile "addons-310200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-310200"

-- /stdout --
** stderr ** 
	W1212 22:05:01.717120   10448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-310200
addons_test.go:938: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-310200: exit status 85 (285.3992ms)

-- stdout --
	* Profile "addons-310200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-310200"

-- /stdout --
** stderr ** 
	W1212 22:05:01.723199    6888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

TestAddons/Setup (380.71s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-310200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-310200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m20.7069404s)
--- PASS: TestAddons/Setup (380.71s)

TestAddons/parallel/Ingress (67.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-310200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-310200 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-310200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8497723e-3460-40dd-b7e1-48f6f9048981] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8497723e-3460-40dd-b7e1-48f6f9048981] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0324281s
addons_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.1849258s)
addons_test.go:268: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-310200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W1212 22:12:25.199880   14740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:285: (dbg) Run:  kubectl --context addons-310200 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 ip
addons_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 ip: (2.5606793s)
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 172.30.52.75
addons_test.go:305: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 addons disable ingress-dns --alsologtostderr -v=1: (15.633883s)
addons_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 addons disable ingress --alsologtostderr -v=1: (24.0254808s)
--- PASS: TestAddons/parallel/Ingress (67.73s)
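
Note: the in-VM curl step above exercises name-based virtual hosting: the request goes to 127.0.0.1 inside the node, and the ingress controller picks the nginx backend purely from the Host header. A similar probe can be made from the host against the node IP reported by "minikube ip" in this run; the sketch below is illustrative and not part of the test code.

// ingressprobe.go - a sketch of hitting an ingress by IP while overriding the
// Host header, mirroring the curl -H 'Host: nginx.example.com' check above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// 172.30.52.75 is the cluster IP printed by "minikube ip" in this run.
	req, err := http.NewRequest(http.MethodGet, "http://172.30.52.75/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// The ingress rule matches on the Host header, not on the URL.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body[:min(len(body), 200)])) // first 200 bytes are enough
}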

TestAddons/parallel/InspektorGadget (26.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mqf74" [d8d5b875-bf2c-4e6e-8839-29b95c7833e4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0293282s
addons_test.go:840: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-310200
addons_test.go:840: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-310200: (21.7223576s)
--- PASS: TestAddons/parallel/InspektorGadget (26.76s)

TestAddons/parallel/MetricsServer (21.41s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 25.8446ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-pl9km" [863bb503-8b23-4306-b284-ff91c5ee39e3] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0432455s
addons_test.go:414: (dbg) Run:  kubectl --context addons-310200 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 addons disable metrics-server --alsologtostderr -v=1: (16.1595889s)
--- PASS: TestAddons/parallel/MetricsServer (21.41s)

TestAddons/parallel/HelmTiller (34.5s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 9.7293ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gggqt" [46c817d4-ae34-4a1d-be4b-7b0de0f5ee40] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0313881s
addons_test.go:472: (dbg) Run:  kubectl --context addons-310200 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-310200 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (15.0975863s)
addons_test.go:477: kubectl --context addons-310200 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 addons disable helm-tiller --alsologtostderr -v=1: (14.3377617s)
--- PASS: TestAddons/parallel/HelmTiller (34.50s)

TestAddons/parallel/CSI (105.14s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 27.3034ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-310200 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:563: (dbg) Done: kubectl --context addons-310200 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.059436s)
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-310200 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1170ed40-b3e5-4cc4-af55-e33a6818ce5c] Pending
helpers_test.go:344: "task-pv-pod" [1170ed40-b3e5-4cc4-af55-e33a6818ce5c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1170ed40-b3e5-4cc4-af55-e33a6818ce5c] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.0609151s
addons_test.go:583: (dbg) Run:  kubectl --context addons-310200 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-310200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-310200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-310200 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-310200 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-310200 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-310200 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3780c11b-c8dc-4f63-851a-9b609d12f72d] Pending
helpers_test.go:344: "task-pv-pod-restore" [3780c11b-c8dc-4f63-851a-9b609d12f72d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3780c11b-c8dc-4f63-851a-9b609d12f72d] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0351884s
addons_test.go:625: (dbg) Run:  kubectl --context addons-310200 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-310200 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-310200 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.3126977s)
addons_test.go:641: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 addons disable volumesnapshots --alsologtostderr -v=1: (17.1028719s)
--- PASS: TestAddons/parallel/CSI (105.14s)
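
Note: the run of identical helpers_test.go:394 lines above is a poll loop: the helper re-runs kubectl with a jsonpath query until the claim's .status.phase reports Bound or the stated deadline passes. A stand-alone sketch of that loop, using the context and claim names from this run; the function itself is hypothetical and the poll interval is an assumption.

// pvcwait.go - a sketch of polling a PVC's phase via kubectl, as the
// helpers_test.go loop above does.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound re-runs the jsonpath query until the PVC is Bound.
func waitForPVCBound(context, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}",
			"-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval; the real helper's may differ
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-310200", "hpvc", "default", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pvc bound")
}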

TestAddons/parallel/Headlamp (35.1s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-310200 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-310200 --alsologtostderr -v=1: (18.0488491s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-dv4bp" [2c25c793-bef8-46ba-9853-fd0bacdb1cbb] Pending
helpers_test.go:344: "headlamp-777fd4b855-dv4bp" [2c25c793-bef8-46ba-9853-fd0bacdb1cbb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-dv4bp" [2c25c793-bef8-46ba-9853-fd0bacdb1cbb] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.0507833s
--- PASS: TestAddons/parallel/Headlamp (35.10s)

TestAddons/parallel/CloudSpanner (21.9s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-h6h7r" [834833da-430b-4b96-8e08-6c4a1326b572] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0406461s
addons_test.go:859: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-310200
addons_test.go:859: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-310200: (16.8424708s)
--- PASS: TestAddons/parallel/CloudSpanner (21.90s)

TestAddons/parallel/LocalPath (87.44s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-310200 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:872: (dbg) Done: kubectl --context addons-310200 apply -f testdata\storage-provisioner-rancher\pvc.yaml: (1.1336453s)
addons_test.go:878: (dbg) Run:  kubectl --context addons-310200 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-310200 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9ad09be2-977a-4520-ab8c-87c75a0c1c5e] Pending
helpers_test.go:344: "test-local-path" [9ad09be2-977a-4520-ab8c-87c75a0c1c5e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9ad09be2-977a-4520-ab8c-87c75a0c1c5e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9ad09be2-977a-4520-ab8c-87c75a0c1c5e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.0399541s
addons_test.go:890: (dbg) Run:  kubectl --context addons-310200 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 ssh "cat /opt/local-path-provisioner/pvc-578e7ec9-4d8f-487f-8d65-81c325ac781a_default_test-pvc/file1"
addons_test.go:899: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 ssh "cat /opt/local-path-provisioner/pvc-578e7ec9-4d8f-487f-8d65-81c325ac781a_default_test-pvc/file1": (10.7384176s)
addons_test.go:911: (dbg) Run:  kubectl --context addons-310200 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-310200 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-310200 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-windows-amd64.exe -p addons-310200 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m1.8699337s)
--- PASS: TestAddons/parallel/LocalPath (87.44s)
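
Note: the ssh "cat ..." step above is how the test proves the local-path provisioner actually materialized the PVC's data on the node's filesystem. A sketch of the same read-back via the minikube CLI; the pvc-... directory name is the one from this run and will differ on any other cluster.

// localpathcheck.go - a sketch of reading a provisioned file back through
// "minikube ssh", as the test above does.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Profile and pvc directory are taken from this run's log.
	out, err := exec.Command("minikube", "-p", "addons-310200", "ssh",
		"cat /opt/local-path-provisioner/pvc-578e7ec9-4d8f-487f-8d65-81c325ac781a_default_test-pvc/file1").CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	fmt.Printf("file1 contents: %s", out)
}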

TestAddons/parallel/NvidiaDevicePlugin (20.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-prm68" [4af535c0-f663-4ff2-ab82-4c2c85b58970] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0403837s
addons_test.go:954: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-310200
addons_test.go:954: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-310200: (15.4151028s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.46s)

TestAddons/serial/GCPAuth/Namespaces (0.33s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-310200 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-310200 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.33s)

TestAddons/StoppedEnableDisable (46.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-310200
addons_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-310200: (34.9580958s)
addons_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-310200
addons_test.go:175: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-310200: (4.6349726s)
addons_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-310200
addons_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-310200: (4.5592659s)
addons_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-310200
addons_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-310200: (2.5584701s)
--- PASS: TestAddons/StoppedEnableDisable (46.71s)

TestCertOptions (488.38s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-416500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E1213 00:15:53.192082   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1213 00:16:22.644258   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1213 00:16:25.451883   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-416500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m8.6123035s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-416500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-416500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (11.1363925s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-416500 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-416500 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-416500 -- "sudo cat /etc/kubernetes/admin.conf": (10.4555077s)
helpers_test.go:175: Cleaning up "cert-options-416500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-416500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-416500: (38.0338858s)
--- PASS: TestCertOptions (488.38s)
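
Note: the openssl x509 dump above lets the test assert that the extra --apiserver-ips/--apiserver-names values landed in the certificate's subject alternative names. The same check in Go over a PEM file copied out of the VM; the local file path is a placeholder, and the IP and DNS name are the ones passed on the start command line above.

// sancheck.go - a sketch of checking that an API server certificate carries
// the expected SAN entries, mirroring the openssl x509 inspection above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	// Placeholder path; the test inspects /var/lib/minikube/certs/apiserver.crt in the VM.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	wantIP := net.ParseIP("192.168.15.15")
	foundIP := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			foundIP = true
		}
	}
	foundDNS := false
	for _, name := range cert.DNSNames {
		if name == "www.google.com" {
			foundDNS = true
		}
	}
	fmt.Printf("SAN has 192.168.15.15: %v, www.google.com: %v\n", foundIP, foundDNS)
}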

TestDockerFlags (377.61s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-359400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E1213 00:14:28.679998   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-359400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (5m14.5777044s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-359400 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-359400 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.3145994s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-359400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-359400 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.2744108s)
helpers_test.go:175: Cleaning up "docker-flags-359400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-359400
E1213 00:20:36.441693   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-359400: (42.4415409s)
--- PASS: TestDockerFlags (377.61s)
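
Note: the two systemctl queries above are the verification step for --docker-env and --docker-opt: the env vars must surface in the docker unit's Environment= property and the opts in its ExecStart line. A sketch of the Environment half of that check via the minikube CLI (profile name from this run; this is illustrative, not the test's own helper):

// dockerflagscheck.go - a sketch of asserting that --docker-env values made
// it into the docker systemd unit, as the systemctl steps above verify.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "docker-flags-359400", "ssh",
		"sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	env := string(out)
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
	}
}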

TestForceSystemdFlag (245.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-730500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-730500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m12.5325554s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-730500 ssh "docker info --format {{.CgroupDriver}}"
E1213 00:01:22.647013   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1213 00:01:25.452409   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-730500 ssh "docker info --format {{.CgroupDriver}}": (10.1015386s)
helpers_test.go:175: Cleaning up "force-systemd-flag-730500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-730500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-730500: (42.512455s)
--- PASS: TestForceSystemdFlag (245.15s)
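
Note: "docker info --format {{.CgroupDriver}}" prints only the cgroup driver, so the test can assert that --force-systemd switched the daemon's driver to systemd. A sketch of that assertion (profile name from this run; not the test's own helper):

// cgroupdrivercheck.go - a sketch of asserting the docker cgroup driver is
// "systemd", mirroring the docker info check above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-730500", "ssh",
		"docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	driver := strings.TrimSpace(string(out))
	if driver != "systemd" {
		log.Fatalf("cgroup driver = %q, want systemd", driver)
	}
	fmt.Println("cgroup driver is systemd")
}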

TestForceSystemdEnv (481.47s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-866200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-866200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m7.8301418s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-866200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-866200 ssh "docker info --format {{.CgroupDriver}}": (10.2191884s)
helpers_test.go:175: Cleaning up "force-systemd-env-866200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-866200
E1213 00:25:53.185539   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-866200: (43.4170467s)
--- PASS: TestForceSystemdEnv (481.47s)

TestErrorSpam/start (17.24s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 start --dry-run: (5.6514009s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 start --dry-run: (5.7964147s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 start --dry-run
E1212 22:19:06.597461   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 start --dry-run: (5.7866693s)
--- PASS: TestErrorSpam/start (17.24s)

TestErrorSpam/status (36.32s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 status: (12.4065201s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 status: (11.9203509s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 status: (11.9893824s)
--- PASS: TestErrorSpam/status (36.32s)

TestErrorSpam/pause (22.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 pause: (7.6990955s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 pause: (7.4906454s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 pause: (7.457321s)
--- PASS: TestErrorSpam/pause (22.65s)

TestErrorSpam/unpause (22.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 unpause: (7.6909254s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 unpause: (7.4822591s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 unpause: (7.5071432s)
--- PASS: TestErrorSpam/unpause (22.68s)

TestErrorSpam/stop (46.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 stop: (28.926661s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 stop: (8.9147358s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-471800 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-471800 stop: (8.638433s)
--- PASS: TestErrorSpam/stop (46.48s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\13816\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (199.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-347300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E1212 22:21:50.448164   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
functional_test.go:2233: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-347300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m19.0593097s)
--- PASS: TestFunctional/serial/StartWithProxy (199.07s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (112.32s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-347300 --alsologtostderr -v=8
E1212 22:26:22.615825   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-347300 --alsologtostderr -v=8: (1m52.3157629s)
functional_test.go:659: soft start took 1m52.3172522s for "functional-347300" cluster.
--- PASS: TestFunctional/serial/SoftStart (112.32s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-347300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (27.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 cache add registry.k8s.io/pause:3.1: (9.5555794s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 cache add registry.k8s.io/pause:3.3: (8.9495612s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 cache add registry.k8s.io/pause:latest: (8.7884631s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.29s)

TestFunctional/serial/CacheCmd/cache/add_local (10.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-347300 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4158857514\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-347300 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4158857514\001: (1.6235692s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cache add minikube-local-cache-test:functional-347300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 cache add minikube-local-cache-test:functional-347300: (7.995664s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cache delete minikube-local-cache-test:functional-347300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-347300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

TestFunctional/serial/CacheCmd/cache/list (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh sudo crictl images: (9.4182233s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.42s)

TestFunctional/serial/CacheCmd/cache/cache_reload (36.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.4071022s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-347300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.3332483s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W1212 22:27:41.380977    1636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 cache reload: (8.2197806s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.3235595s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.28s)

TestFunctional/serial/CacheCmd/cache/delete (0.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.56s)

TestFunctional/serial/MinikubeKubectlCmd (0.48s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 kubectl -- --context functional-347300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.43s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-347300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.43s)

TestFunctional/serial/ExtraConfig (122.91s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-347300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-347300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m2.9043305s)
functional_test.go:757: restart took 2m2.904793s for "functional-347300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (122.91s)

TestFunctional/serial/ComponentHealth (0.17s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-347300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.17s)

TestFunctional/serial/LogsCmd (8.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 logs: (8.285389s)
--- PASS: TestFunctional/serial/LogsCmd (8.29s)

TestFunctional/serial/LogsFileCmd (10.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3728262888\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3728262888\001\logs.txt: (10.4164723s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.42s)

TestFunctional/serial/InvalidService (20.74s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-347300 apply -f testdata\invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-347300
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-347300: exit status 115 (16.5245534s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://172.30.55.40:32026 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	W1212 22:30:36.045239   15332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_service_c9bf6787273d25f6c9d72c0b156373dea6a4fe44_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-347300 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (20.74s)

TestFunctional/parallel/StatusCmd (42.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 status: (13.51848s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.612858s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 status -o json: (14.7514544s)
--- PASS: TestFunctional/parallel/StatusCmd (42.88s)

TestFunctional/parallel/ServiceCmdConnect (27.29s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-347300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-347300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-xbxmk" [f4f16231-05ac-463f-81e0-6cd21ff36c88] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-xbxmk" [f4f16231-05ac-463f-81e0-6cd21ff36c88] Running
E1212 22:32:45.816191   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0326555s
functional_test.go:1648: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 service hello-node-connect --url
functional_test.go:1648: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 service hello-node-connect --url: (18.8423974s)
functional_test.go:1654: found endpoint for hello-node-connect: http://172.30.55.40:32512
functional_test.go:1674: http://172.30.55.40:32512: success! body:

Hostname: hello-node-connect-55497b8b78-xbxmk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.30.55.40:8080/

Request Headers:
	accept-encoding=gzip
	host=172.30.55.40:32512
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.29s)

TestFunctional/parallel/AddonsCmd (0.89s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.89s)

TestFunctional/parallel/PersistentVolumeClaim (44.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [16a1e52c-dfe2-49f1-af19-6626cf1b823a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0397728s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-347300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-347300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-347300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-347300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [12dbcc16-c5da-493c-a337-a6bbe3c39744] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [12dbcc16-c5da-493c-a337-a6bbe3c39744] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.0262455s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-347300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-347300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-347300 delete -f testdata/storage-provisioner/pod.yaml: (1.4881984s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-347300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fe9242b9-13b8-43c8-a805-66791e4ffbbe] Pending
helpers_test.go:344: "sp-pod" [fe9242b9-13b8-43c8-a805-66791e4ffbbe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fe9242b9-13b8-43c8-a805-66791e4ffbbe] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0265622s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-347300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.16s)

TestFunctional/parallel/SSHCmd (21.06s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "echo hello"
functional_test.go:1724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "echo hello": (10.8004958s)
functional_test.go:1741: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "cat /etc/hostname": (10.2584179s)
--- PASS: TestFunctional/parallel/SSHCmd (21.06s)

TestFunctional/parallel/CpCmd (58.69s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.0240058s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh -n functional-347300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh -n functional-347300 "sudo cat /home/docker/cp-test.txt": (10.1060448s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cp functional-347300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd1795590124\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 cp functional-347300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd1795590124\001\cp-test.txt: (10.4788274s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh -n functional-347300 "sudo cat /home/docker/cp-test.txt"
E1212 22:31:22.613231   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh -n functional-347300 "sudo cat /home/docker/cp-test.txt": (10.3659299s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.0635075s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh -n functional-347300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh -n functional-347300 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.6373227s)
--- PASS: TestFunctional/parallel/CpCmd (58.69s)

TestFunctional/parallel/MySQL (66.07s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-347300 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ngzq8" [290b3815-2243-414c-97da-a862deafc3c6] Pending
helpers_test.go:344: "mysql-859648c796-ngzq8" [290b3815-2243-414c-97da-a862deafc3c6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ngzq8" [290b3815-2243-414c-97da-a862deafc3c6] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 48.100164s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;": exit status 1 (308.9218ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;": exit status 1 (363.8934ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;": exit status 1 (333.7535ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;": exit status 1 (368.1233ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;": exit status 1 (431.2475ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-347300 exec mysql-859648c796-ngzq8 -- mysql -ppassword -e "show databases;"
E1212 22:36:22.621094   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (66.07s)

TestFunctional/parallel/FileSync (10.59s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/13816/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/test/nested/copy/13816/hosts"
functional_test.go:1930: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/test/nested/copy/13816/hosts": (10.592982s)
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.59s)

TestFunctional/parallel/CertSync (62.99s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/13816.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/ssl/certs/13816.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/ssl/certs/13816.pem": (10.7542592s)
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/13816.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /usr/share/ca-certificates/13816.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /usr/share/ca-certificates/13816.pem": (10.6093266s)
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.6067084s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/138162.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/ssl/certs/138162.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/ssl/certs/138162.pem": (10.3182056s)
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/138162.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /usr/share/ca-certificates/138162.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /usr/share/ca-certificates/138162.pem": (10.2237975s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.4721681s)
--- PASS: TestFunctional/parallel/CertSync (62.99s)

TestFunctional/parallel/NodeLabels (0.19s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-347300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

TestFunctional/parallel/NonActiveRuntimeDisabled (10.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-347300 ssh "sudo systemctl is-active crio": exit status 1 (10.5999435s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W1212 22:32:06.904249    5308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.60s)

TestFunctional/parallel/License (3.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe license: (3.2303217s)
--- PASS: TestFunctional/parallel/License (3.25s)

TestFunctional/parallel/ServiceCmd/DeployApp (18.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-347300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-347300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-spddj" [4ad21896-0fce-4fc0-ad34-a46ba76f7aef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-spddj" [4ad21896-0fce-4fc0-ad34-a46ba76f7aef] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.0225773s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.40s)

TestFunctional/parallel/ProfileCmd/profile_not_create (8.76s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1274: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (8.2913195s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (8.76s)

TestFunctional/parallel/ProfileCmd/profile_list (8.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1309: (dbg) Done: out/minikube-windows-amd64.exe profile list: (8.0140243s)
functional_test.go:1314: Took "8.0141654s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1328: Took "264.7793ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (8.28s)

TestFunctional/parallel/ServiceCmd/List (13.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 service list
functional_test.go:1458: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 service list: (13.4761659s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (8.76s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (8.4892396s)
functional_test.go:1365: Took "8.4895169s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1378: Took "266.3516ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (8.76s)

TestFunctional/parallel/ServiceCmd/JSONOutput (13.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 service list -o json: (13.2822718s)
functional_test.go:1493: Took "13.282859s" to run "out/minikube-windows-amd64.exe -p functional-347300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.28s)

TestFunctional/parallel/Version/short (0.25s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.25s)

TestFunctional/parallel/Version/components (8.74s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 version -o=json --components: (8.7444668s)
--- PASS: TestFunctional/parallel/Version/components (8.74s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls --format short --alsologtostderr: (7.7888326s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-347300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-347300
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-347300
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-347300 image ls --format short --alsologtostderr:
W1212 22:34:25.177008   11436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1212 22:34:25.280032   11436 out.go:296] Setting OutFile to fd 748 ...
I1212 22:34:25.281427   11436 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:25.281523   11436 out.go:309] Setting ErrFile to fd 728...
I1212 22:34:25.281613   11436 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:25.303823   11436 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:25.304422   11436 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:25.304422   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:27.634529   11436 main.go:141] libmachine: [stdout =====>] : Running

I1212 22:34:27.634529   11436 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:27.650920   11436 ssh_runner.go:195] Run: systemctl --version
I1212 22:34:27.650920   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:29.927013   11436 main.go:141] libmachine: [stdout =====>] : Running

I1212 22:34:29.927254   11436 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:29.927388   11436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-347300 ).networkadapters[0]).ipaddresses[0]
I1212 22:34:32.592280   11436 main.go:141] libmachine: [stdout =====>] : 172.30.55.40

I1212 22:34:32.592280   11436 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:32.593322   11436 sshutil.go:53] new ssh client: &{IP:172.30.55.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-347300\id_rsa Username:docker}
I1212 22:34:32.709342   11436 ssh_runner.go:235] Completed: systemctl --version: (5.0583996s)
I1212 22:34:32.721656   11436 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.79s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls --format table --alsologtostderr: (7.6226892s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-347300 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| docker.io/library/nginx                     | alpine            | 01e5c69afaf63 | 42.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/google-containers/addon-resizer      | functional-347300 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-347300 | b0d0655cf77de | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| docker.io/localhost/my-image                | functional-347300 | abde9c3b9f2bd | 1.24MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-347300 image ls --format table --alsologtostderr:
W1212 22:34:48.295885    9704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1212 22:34:48.379706    9704 out.go:296] Setting OutFile to fd 960 ...
I1212 22:34:48.395486    9704 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:48.395486    9704 out.go:309] Setting ErrFile to fd 756...
I1212 22:34:48.395486    9704 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:48.414921    9704 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:48.415895    9704 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:48.416780    9704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:50.673733    9704 main.go:141] libmachine: [stdout =====>] : Running
I1212 22:34:50.673933    9704 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:50.688136    9704 ssh_runner.go:195] Run: systemctl --version
I1212 22:34:50.688136    9704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:52.953324    9704 main.go:141] libmachine: [stdout =====>] : Running
I1212 22:34:52.953598    9704 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:52.953598    9704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-347300 ).networkadapters[0]).ipaddresses[0]
I1212 22:34:55.595616    9704 main.go:141] libmachine: [stdout =====>] : 172.30.55.40
I1212 22:34:55.595677    9704 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:55.596379    9704 sshutil.go:53] new ssh client: &{IP:172.30.55.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-347300\id_rsa Username:docker}
I1212 22:34:55.697326    9704 ssh_runner.go:235] Completed: systemctl --version: (5.0091677s)
I1212 22:34:55.708278    9704 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.62s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls --format json --alsologtostderr: (7.7719072s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-347300 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b0d0655cf77de8fc67338006d272d7e2a6c7a6977ad0899f5c2d64199dbfdc19","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-347300"],"size":"30"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76
bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gc
r.io/google-containers/addon-resizer:functional-347300"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-347300 image ls --format json --alsologtostderr:
W1212 22:34:32.943586   15256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1212 22:34:33.036257   15256 out.go:296] Setting OutFile to fd 788 ...
I1212 22:34:33.036957   15256 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:33.036957   15256 out.go:309] Setting ErrFile to fd 812...
I1212 22:34:33.036957   15256 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:33.053564   15256 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:33.053941   15256 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:33.054751   15256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:35.316745   15256 main.go:141] libmachine: [stdout =====>] : Running
I1212 22:34:35.316845   15256 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:35.334030   15256 ssh_runner.go:195] Run: systemctl --version
I1212 22:34:35.334144   15256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:37.670825   15256 main.go:141] libmachine: [stdout =====>] : Running
I1212 22:34:37.670979   15256 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:37.670979   15256 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-347300 ).networkadapters[0]).ipaddresses[0]
I1212 22:34:40.392680   15256 main.go:141] libmachine: [stdout =====>] : 172.30.55.40
I1212 22:34:40.392680   15256 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:40.393277   15256 sshutil.go:53] new ssh client: &{IP:172.30.55.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-347300\id_rsa Username:docker}
I1212 22:34:40.495143   15256 ssh_runner.go:235] Completed: systemctl --version: (5.161015s)
I1212 22:34:40.506663   15256 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.77s)
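For reference, the JSON printed by `image ls --format json` above is a flat array of objects with id, repoDigests, repoTags, and size fields (size is a decimal byte count encoded as a string). A minimal Go sketch of decoding that shape — the struct name is ours, not minikube's; the field names come straight from the output above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageListEntry mirrors one element of the `image ls --format json` output.
	type imageListEntry struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // bytes, as a string
	}

	func main() {
		// Shortened sample taken from the stdout above.
		raw := `[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]`
		var images []imageListEntry
		if err := json.Unmarshal([]byte(raw), &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}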

TestFunctional/parallel/ImageCommands/ImageListYaml (7.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls --format yaml --alsologtostderr: (7.57752s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-347300 image ls --format yaml --alsologtostderr:
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: b0d0655cf77de8fc67338006d272d7e2a6c7a6977ad0899f5c2d64199dbfdc19
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-347300
size: "30"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-347300
size: "32900000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1240000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: abde9c3b9f2bddcd9096038fd3ee58eb34011e9bcf3561f3cca92d4bb1f331ce
repoDigests: []
repoTags:
- docker.io/localhost/my-image:functional-347300
size: "1240000"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-347300 image ls --format yaml --alsologtostderr:
W1212 22:34:40.718069   10136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1212 22:34:40.809568   10136 out.go:296] Setting OutFile to fd 876 ...
I1212 22:34:40.810558   10136 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:40.810558   10136 out.go:309] Setting ErrFile to fd 912...
I1212 22:34:40.810558   10136 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:40.828587   10136 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:40.828587   10136 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:40.831864   10136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:43.060462   10136 main.go:141] libmachine: [stdout =====>] : Running
I1212 22:34:43.060669   10136 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:43.075727   10136 ssh_runner.go:195] Run: systemctl --version
I1212 22:34:43.075727   10136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:45.298518   10136 main.go:141] libmachine: [stdout =====>] : Running
I1212 22:34:45.298518   10136 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:45.298634   10136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-347300 ).networkadapters[0]).ipaddresses[0]
I1212 22:34:47.941100   10136 main.go:141] libmachine: [stdout =====>] : 172.30.55.40
I1212 22:34:47.941298   10136 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:47.941467   10136 sshutil.go:53] new ssh client: &{IP:172.30.55.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-347300\id_rsa Username:docker}
I1212 22:34:48.060922   10136 ssh_runner.go:235] Completed: systemctl --version: (4.9851723s)
I1212 22:34:48.072096   10136 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.58s)

TestFunctional/parallel/ImageCommands/ImageBuild (29.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-347300 ssh pgrep buildkitd: exit status 1 (10.0290916s)
** stderr ** 
	W1212 22:34:25.172760    1068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image build -t localhost/my-image:functional-347300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image build -t localhost/my-image:functional-347300 testdata\build --alsologtostderr: (11.5546497s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-347300 image build -t localhost/my-image:functional-347300 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 249d3bf413be
Removing intermediate container 249d3bf413be
---> e646febe522a
Step 3/3 : ADD content.txt /
---> abde9c3b9f2b
Successfully built abde9c3b9f2b
Successfully tagged localhost/my-image:functional-347300
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-347300 image build -t localhost/my-image:functional-347300 testdata\build --alsologtostderr:
W1212 22:34:35.204712   10608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1212 22:34:35.296188   10608 out.go:296] Setting OutFile to fd 580 ...
I1212 22:34:35.313297   10608 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:35.313297   10608 out.go:309] Setting ErrFile to fd 856...
I1212 22:34:35.313297   10608 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:34:35.337054   10608 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:35.355325   10608 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 22:34:35.357197   10608 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:37.685959   10608 main.go:141] libmachine: [stdout =====>] : Running
I1212 22:34:37.685959   10608 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:37.703208   10608 ssh_runner.go:195] Run: systemctl --version
I1212 22:34:37.703208   10608 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-347300 ).state
I1212 22:34:39.947303   10608 main.go:141] libmachine: [stdout =====>] : Running
I1212 22:34:39.947592   10608 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:39.947592   10608 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-347300 ).networkadapters[0]).ipaddresses[0]
I1212 22:34:42.624456   10608 main.go:141] libmachine: [stdout =====>] : 172.30.55.40
I1212 22:34:42.624456   10608 main.go:141] libmachine: [stderr =====>] : 
I1212 22:34:42.624829   10608 sshutil.go:53] new ssh client: &{IP:172.30.55.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-347300\id_rsa Username:docker}
I1212 22:34:42.743567   10608 ssh_runner.go:235] Completed: systemctl --version: (5.0403357s)
I1212 22:34:42.743658   10608 build_images.go:151] Building image from path: C:\Users\jenkins.minikube7\AppData\Local\Temp\build.883545499.tar
I1212 22:34:42.760793   10608 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 22:34:42.815625   10608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.883545499.tar
I1212 22:34:42.823627   10608 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.883545499.tar: stat -c "%s %y" /var/lib/minikube/build/build.883545499.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.883545499.tar': No such file or directory
I1212 22:34:42.823627   10608 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\AppData\Local\Temp\build.883545499.tar --> /var/lib/minikube/build/build.883545499.tar (3072 bytes)
I1212 22:34:42.906050   10608 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.883545499
I1212 22:34:42.940985   10608 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.883545499 -xf /var/lib/minikube/build/build.883545499.tar
I1212 22:34:42.959442   10608 docker.go:346] Building image: /var/lib/minikube/build/build.883545499
I1212 22:34:42.970309   10608 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-347300 /var/lib/minikube/build/build.883545499
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I1212 22:34:46.516101   10608 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-347300 /var/lib/minikube/build/build.883545499: (3.5457766s)
I1212 22:34:46.530901   10608 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.883545499
I1212 22:34:46.562500   10608 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.883545499.tar
I1212 22:34:46.577537   10608 build_images.go:207] Built localhost/my-image:functional-347300 from C:\Users\jenkins.minikube7\AppData\Local\Temp\build.883545499.tar
I1212 22:34:46.577537   10608 build_images.go:123] succeeded building to: functional-347300
I1212 22:34:46.577537   10608 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls: (7.7596127s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (29.34s)
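The passing build above shows the whole flow: the build context is tarred locally, copied to /var/lib/minikube/build inside the VM, unpacked, and built with the VM's docker daemon. A minimal Go sketch of driving the same CLI step with os/exec — binary path, profile, tag, and context directory are taken verbatim from the log; this is a standalone sketch, not the helper functional_test.go actually uses:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors: out/minikube-windows-amd64.exe -p functional-347300 image build ...
		cmd := exec.Command(`out/minikube-windows-amd64.exe`,
			"-p", "functional-347300",
			"image", "build",
			"-t", "localhost/my-image:functional-347300",
			`testdata\build`,
			"--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("image build failed:", err)
		}
	}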

TestFunctional/parallel/ImageCommands/Setup (4.09s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.8133476s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-347300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.09s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.91s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-347300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-347300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-347300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 15312: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 12584: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-347300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.91s)
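The "unable to kill pid" lines above are expected noise on Windows: os.FindProcess wraps OpenProcess there, which fails with "The parameter is incorrect." once the pid no longer exists, and TerminateProcess returns "Access is denied." for a process the caller cannot open for termination, so the helper falls back to assuming the process is dead. A small sketch of that behavior (the pid is made up):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On Windows, os.FindProcess calls OpenProcess and returns an error
		// for a pid that no longer exists; on Unix it always succeeds.
		p, err := os.FindProcess(999999) // hypothetical pid
		if err != nil {
			fmt.Println("assuming dead:", err) // e.g. "OpenProcess: The parameter is incorrect."
			return
		}
		// Kill can still fail, e.g. "Access is denied." for a process the
		// caller lacks rights to, matching the TerminateProcess line above.
		if err := p.Kill(); err != nil {
			fmt.Println("kill failed:", err)
		}
	}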

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image load --daemon gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image load --daemon gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr: (17.7503335s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls: (7.7571343s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.51s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-347300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.86s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-347300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [424417a2-48de-4f38-9877-2f83ac089f0c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [424417a2-48de-4f38-9877-2f83ac089f0c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.0465773s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.86s)
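The harness polls the pod list until a pod labeled run=nginx-svc reports Running and Ready, as seen in the two helpers_test.go lines above. Outside the harness, roughly the same readiness gate can be expressed with `kubectl wait`; a sketch that shells out to kubectl rather than reusing the test's own poller:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Roughly equivalent to the 4m0s readiness wait in the test above.
		cmd := exec.Command("kubectl",
			"--context", "functional-347300",
			"wait", "pod",
			"--selector", "run=nginx-svc",
			"--for", "condition=Ready",
			"--timeout", "4m")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("wait failed:", err)
		}
	}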

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-347300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7636: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image load --daemon gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image load --daemon gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr: (11.3714265s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls: (7.8651715s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.24s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (26.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.6116633s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-347300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image load --daemon gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image load --daemon gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr: (14.8122111s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls: (8.2577058s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (26.95s)

TestFunctional/parallel/DockerEnv/powershell (44.82s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-347300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-347300"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-347300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-347300": (29.4729126s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-347300 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-347300 docker-env | Invoke-Expression ; docker images": (15.3316721s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (44.82s)
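`docker-env | Invoke-Expression` works because minikube prints PowerShell-style `$Env:DOCKER_*` assignments (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH) that repoint the local docker CLI at the daemon inside the VM. The same redirection can be done from Go by setting those variables on a child process; the values below are placeholders modeled on the VM IP from the log and minikube's usual cert directory, not captured output:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Placeholder values; in practice they come from
		// `minikube -p functional-347300 docker-env` output.
		env := append(os.Environ(),
			"DOCKER_TLS_VERIFY=1",
			"DOCKER_HOST=tcp://172.30.55.40:2376", // VM IP from the log; 2376 is the standard Docker TLS port
			`DOCKER_CERT_PATH=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs`,
		)
		cmd := exec.Command("docker", "images")
		cmd.Env = env
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("docker images failed:", err)
		}
	}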

TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image save gcr.io/google-containers/addon-resizer:functional-347300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image save gcr.io/google-containers/addon-resizer:functional-347300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.887206s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.89s)

TestFunctional/parallel/ImageCommands/ImageRemove (16.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image rm gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image rm gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr: (8.2305722s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls: (7.9958728s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.23s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.54s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 update-context --alsologtostderr -v=2: (2.5386661s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.54s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.48s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 update-context --alsologtostderr -v=2: (2.4740497s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.48s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.55s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 update-context --alsologtostderr -v=2: (2.5516236s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.2561985s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image ls: (7.7942787s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.05s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-347300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-347300 image save --daemon gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-347300 image save --daemon gcr.io/google-containers/addon-resizer:functional-347300 --alsologtostderr: (10.8514326s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-347300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.30s)

TestFunctional/delete_addon-resizer_images (0.44s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-347300
--- PASS: TestFunctional/delete_addon-resizer_images (0.44s)

TestFunctional/delete_my-image_image (0.2s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-347300
--- PASS: TestFunctional/delete_my-image_image (0.20s)

TestFunctional/delete_minikube_cached_images (0.19s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-347300
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

TestImageBuild/serial/Setup (188.11s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-247600 --driver=hyperv
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-247600 --driver=hyperv: (3m8.1050883s)
--- PASS: TestImageBuild/serial/Setup (188.11s)

TestImageBuild/serial/NormalBuild (9.07s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-247600
E1212 22:40:53.164482   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:53.179200   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:53.194998   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:53.226018   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:53.272618   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:53.366886   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:53.541374   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:53.875485   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:54.528078   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:55.809611   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:40:58.376508   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-247600: (9.0697508s)
--- PASS: TestImageBuild/serial/NormalBuild (9.07s)

TestImageBuild/serial/BuildWithBuildArg (8.62s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-247600
E1212 22:41:03.509976   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-247600: (8.6199197s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.62s)

TestImageBuild/serial/BuildWithDockerIgnore (7.55s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-247600
E1212 22:41:13.752088   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-247600: (7.5453761s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.55s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.54s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-247600
E1212 22:41:22.622227   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-247600: (7.5410974s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.54s)

TestIngressAddonLegacy/StartLegacyK8sCluster (209.56s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-443200 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E1212 22:42:15.215947   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:43:37.143428   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-443200 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (3m29.560695s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (209.56s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (38.88s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 addons enable ingress --alsologtostderr -v=5
E1212 22:45:53.162236   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 addons enable ingress --alsologtostderr -v=5: (38.8755555s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (38.88s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 addons enable ingress-dns --alsologtostderr -v=5
E1212 22:46:20.995987   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:46:22.622193   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 addons enable ingress-dns --alsologtostderr -v=5: (14.3479515s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.35s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (82.78s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-443200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-443200 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-443200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [428e6591-2134-42b1-be90-520377d2cdb1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [428e6591-2134-42b1-be90-520377d2cdb1] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 29.0476379s
addons_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.3872577s)
addons_test.go:268: debug: unexpected stderr for out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W1212 22:46:56.040342    9540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-443200 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 ip
addons_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 ip: (2.5686777s)
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 172.30.56.9
addons_test.go:305: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 addons disable ingress-dns --alsologtostderr -v=1: (18.2810752s)
addons_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-443200 addons disable ingress --alsologtostderr -v=1: (21.3423581s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (82.78s)

TestJSONOutput/start/Command (198.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-323100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E1212 22:49:25.830808   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:50:53.171103   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 22:51:22.626056   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 22:51:25.439552   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:25.455842   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:25.471892   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:25.502694   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:25.549630   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:25.645301   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:25.817701   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:26.150953   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:26.800717   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:28.084039   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:30.650252   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:35.779591   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 22:51:46.029746   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-323100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m18.6411219s)
--- PASS: TestJSONOutput/start/Command (198.64s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.84s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-323100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-323100 --output=json --user=testUser: (7.8415139s)
--- PASS: TestJSONOutput/pause/Command (7.84s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-323100 --output=json --user=testUser
E1212 22:52:06.519619   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-323100 --output=json --user=testUser: (7.6291297s)
--- PASS: TestJSONOutput/unpause/Command (7.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (33.64s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-323100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-323100 --output=json --user=testUser: (33.6382197s)
--- PASS: TestJSONOutput/stop/Command (33.64s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.55s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-287300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-287300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (263.2195ms)

-- stdout --
	{"specversion":"1.0","id":"45b11e00-40b2-4823-9297-2dc50c59b8dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-287300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff921d94-f63f-4fa7-abd2-94579c399f48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f555abf7-e910-4189-b098-1895aff3ce26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"63f636a6-4405-4978-97d5-e3554985d330","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"52c4ff9c-983e-4842-b572-5a1ae74b981f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17761"}}
	{"specversion":"1.0","id":"9eeb7f4a-be44-4d76-a421-1c791d36b432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41ef9286-5028-4c16-bcd3-5e4b3888b17b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W1212 22:52:54.460211    6420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-287300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-287300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-287300: (1.2889072s)
--- PASS: TestErrorJSONOutput (1.55s)

TestMainNoArgs (0.25s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.25s)

TestMountStart/serial/StartWithMountFirst (146.28s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-459600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-459600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m25.2682126s)
--- PASS: TestMountStart/serial/StartWithMountFirst (146.28s)

TestMountStart/serial/VerifyMountFirst (10s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-459600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-459600 ssh -- ls /minikube-host: (10.0032306s)
--- PASS: TestMountStart/serial/VerifyMountFirst (10.00s)

TestMountStart/serial/StartWithMountSecond (157.58s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-459600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E1212 23:05:53.171051   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 23:06:05.838042   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:06:22.637961   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:06:25.431717   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-459600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m36.5809047s)
--- PASS: TestMountStart/serial/StartWithMountSecond (157.58s)

TestMountStart/serial/VerifyMountSecond (10.14s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-459600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-459600 ssh -- ls /minikube-host: (10.1433535s)
--- PASS: TestMountStart/serial/VerifyMountSecond (10.14s)

TestMountStart/serial/DeleteFirst (65.97s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-459600 --alsologtostderr -v=5
E1212 23:07:48.633052   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-459600 --alsologtostderr -v=5: (1m5.9710809s)
--- PASS: TestMountStart/serial/DeleteFirst (65.97s)

TestMountStart/serial/VerifyMountPostDelete (9.8s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-459600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-459600 ssh -- ls /minikube-host: (9.79665s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.80s)

TestMountStart/serial/Stop (22.69s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-459600
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-459600: (22.6922112s)
--- PASS: TestMountStart/serial/Stop (22.69s)

TestMountStart/serial/RestartStopped (112.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-459600
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-459600: (1m51.9504247s)
E1212 23:10:53.179562   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
--- PASS: TestMountStart/serial/RestartStopped (112.96s)

TestMountStart/serial/VerifyMountPostStop (9.43s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-459600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-459600 ssh -- ls /minikube-host: (9.429793s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.43s)

TestMultiNode/serial/MultiNodeLabels (0.17s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-392000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.17s)

TestMultiNode/serial/ProfileList (7.53s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E1212 23:36:25.447726   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.5314098s)
--- PASS: TestMultiNode/serial/ProfileList (7.53s)

TestPreload (490.78s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-686300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E1212 23:45:53.186065   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 23:46:22.637243   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:46:25.450232   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
E1212 23:47:16.413981   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-686300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m18.5266089s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-686300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-686300 image pull gcr.io/k8s-minikube/busybox: (8.3128436s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-686300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-686300: (33.4032411s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-686300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E1212 23:50:53.186892   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
E1212 23:51:22.645535   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:51:25.456110   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-686300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m26.8907982s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-686300 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-686300 image list: (7.1975258s)
helpers_test.go:175: Cleaning up "test-preload-686300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-686300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-686300: (36.4469881s)
--- PASS: TestPreload (490.78s)

TestScheduledStopWindows (321.58s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-667200 --memory=2048 --driver=hyperv
E1212 23:55:53.189273   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-347300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-667200 --memory=2048 --driver=hyperv: (3m9.2736567s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-667200 --schedule 5m
E1212 23:56:05.880058   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-667200 --schedule 5m: (10.5301628s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-667200 -n scheduled-stop-667200
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-667200 -n scheduled-stop-667200: exit status 1 (10.0325591s)

** stderr ** 
	W1212 23:56:07.945405    4736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-667200 -- sudo systemctl show minikube-scheduled-stop --no-page
E1212 23:56:22.640310   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
E1212 23:56:25.448835   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-667200 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.5010096s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-667200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-667200 --schedule 5s: (10.4135808s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-667200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-667200: exit status 7 (2.4353043s)

-- stdout --
	scheduled-stop-667200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W1212 23:57:37.902553    7036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-667200 -n scheduled-stop-667200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-667200 -n scheduled-stop-667200: exit status 7 (2.4301247s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1212 23:57:40.350881    3888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-667200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-667200
E1212 23:57:48.667927   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-667200: (26.9494326s)
--- PASS: TestScheduledStopWindows (321.58s)

TestKubernetesUpgrade (937.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-120400 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-120400 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (6m23.567842s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-120400
version_upgrade_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-120400: (30.0306543s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-120400 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-120400 status --format={{.Host}}: exit status 7 (3.0333123s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1213 00:09:08.503948   15172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-120400 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-120400 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (3m42.8621352s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-120400 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-120400 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-120400 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (264.0954ms)

-- stdout --
	* [kubernetes-upgrade-120400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W1213 00:12:54.525683    2272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-120400
	    minikube start -p kubernetes-upgrade-120400 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1204002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-120400 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-120400 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-120400 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (4m22.210849s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-120400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-120400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-120400: (35.4083764s)
--- PASS: TestKubernetesUpgrade (937.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-665000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-665000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (352.1315ms)

-- stdout --
	* [NoKubernetes-665000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W1212 23:58:09.745874    4900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)

TestStoppedBinaryUpgrade/Setup (0.82s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.82s)

TestStoppedBinaryUpgrade/Upgrade (491.33s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1533477787.exe start -p stopped-upgrade-632600 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:196: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1533477787.exe start -p stopped-upgrade-632600 --memory=2200 --vm-driver=hyperv: (4m12.2986367s)
version_upgrade_test.go:205: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1533477787.exe -p stopped-upgrade-632600 stop
version_upgrade_test.go:205: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.6.2.1533477787.exe -p stopped-upgrade-632600 stop: (29.5139161s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-632600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:211: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-632600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (3m29.5159774s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (491.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.44s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-632600
E1213 00:11:25.452604   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-443200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-632600: (9.4415899s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.44s)

TestPause/serial/Start (265.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-804400 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E1213 00:12:45.898869   13816 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-310200\client.crt: The system cannot find the path specified.
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-804400 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (4m25.7243056s)
--- PASS: TestPause/serial/Start (265.72s)

TestPause/serial/SecondStartNoReconfiguration (384.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-804400 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-804400 --alsologtostderr -v=1 --driver=hyperv: (6m24.0179192s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (384.06s)

TestPause/serial/Pause (9.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-804400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-804400 --alsologtostderr -v=5: (9.4937357s)
--- PASS: TestPause/serial/Pause (9.49s)

TestPause/serial/VerifyStatus (12.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-804400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-804400 --output=json --layout=cluster: exit status 2 (12.4377806s)

-- stdout --
	{"Name":"pause-804400","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-804400","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W1213 00:23:06.713478    5056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (12.44s)

TestPause/serial/Unpause (7.98s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-804400 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-804400 --alsologtostderr -v=5: (7.9750712s)
--- PASS: TestPause/serial/Unpause (7.98s)

TestPause/serial/PauseAgain (8.02s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-804400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-804400 --alsologtostderr -v=5: (8.0173749s)
--- PASS: TestPause/serial/PauseAgain (8.02s)

TestPause/serial/DeletePaused (37.59s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-804400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-804400 --alsologtostderr -v=5: (37.5909865s)
--- PASS: TestPause/serial/DeletePaused (37.59s)

TestPause/serial/VerifyDeletedResources (8.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (8.612792s)
--- PASS: TestPause/serial/VerifyDeletedResources (8.61s)

Test skip (32/206)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-347300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-347300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 4260: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-347300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-347300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0496056s)

-- stdout --
	* [functional-347300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W1212 22:31:42.519888    1968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1212 22:31:42.624458    1968 out.go:296] Setting OutFile to fd 968 ...
	I1212 22:31:42.625462    1968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:31:42.625462    1968 out.go:309] Setting ErrFile to fd 840...
	I1212 22:31:42.625462    1968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:31:42.647456    1968 out.go:303] Setting JSON to false
	I1212 22:31:42.652458    1968 start.go:128] hostinfo: {"hostname":"minikube7","uptime":73900,"bootTime":1702346402,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 22:31:42.652458    1968 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 22:31:42.653462    1968 out.go:177] * [functional-347300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 22:31:42.654459    1968 notify.go:220] Checking for updates...
	I1212 22:31:42.655460    1968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 22:31:42.655460    1968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:31:42.656464    1968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 22:31:42.657457    1968 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:31:42.658460    1968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:31:42.659460    1968 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 22:31:42.660458    1968 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (5.06s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-347300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-347300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0578744s)

-- stdout --
	* [functional-347300] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W1212 22:31:37.471967   13740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1212 22:31:37.560884   13740 out.go:296] Setting OutFile to fd 688 ...
	I1212 22:31:37.561807   13740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:31:37.561807   13740 out.go:309] Setting ErrFile to fd 924...
	I1212 22:31:37.561807   13740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:31:37.592859   13740 out.go:303] Setting JSON to false
	I1212 22:31:37.597860   13740 start.go:128] hostinfo: {"hostname":"minikube7","uptime":73895,"bootTime":1702346402,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3758 Build 19045.3758","kernelVersion":"10.0.19045.3758 Build 19045.3758","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W1212 22:31:37.597860   13740 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 22:31:37.598859   13740 out.go:177] * [functional-347300] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3758 Build 19045.3758
	I1212 22:31:37.599853   13740 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I1212 22:31:37.599853   13740 notify.go:220] Checking for updates...
	I1212 22:31:37.601864   13740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:31:37.602861   13740 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I1212 22:31:37.603878   13740 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:31:37.603878   13740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:31:37.605863   13740 config.go:182] Loaded profile config "functional-347300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 22:31:37.606862   13740 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.06s)
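Unlike the OS gates, DryRun and InternationalLanguage gate on the configured driver after the command has already run: the non-zero exit above is downgraded to a skip for as long as https://github.com/kubernetes/minikube/issues/9785 stays open. A rough sketch of that shape, using a hypothetical helper name (the real check lives inline in functional_test.go):

	package integration

	import "testing"

	// skipHyperVKnownFailure is a hypothetical helper: on the hyperv driver a
	// start error is currently expected (issue 9785), so it becomes a skip;
	// on any other driver the same error is a genuine test failure.
	func skipHyperVKnownFailure(t *testing.T, driver string, err error) {
		if err == nil {
			return
		}
		if driver == "hyperv" {
			t.Skip("skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785")
		}
		t.Fatalf("minikube start failed: %v", err)
	}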

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
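TestGvisorAddon shows the third gating mechanism in this run: an opt-in command-line flag, so the expensive suite only runs when explicitly requested. A minimal sketch of a flag-gated test, with the flag wiring assumed for illustration rather than copied from minikube:

	package integration

	import (
		"flag"
		"testing"
	)

	var gvisor = flag.Bool("gvisor", false, "run the gVisor addon tests")

	func TestGvisorAddon(t *testing.T) {
		if !*gvisor {
			t.Skip("skipping test because --gvisor=false")
		}
		// ... enable the gvisor addon and run a gVisor-isolated workload ...
	}

Enabling it looks like: go test -run TestGvisorAddon -gvisor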

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
